WorldWideScience

Sample records for streaming array computer

  1. Battling memory requirements of array programming through streaming

    DEFF Research Database (Denmark)

    Kristensen, Mads Ruben Burgdorff; Avery, James Emil; Blum, Troels

    2016-01-01

A barrier to efficient array programming, for example in Python/NumPy, is that algorithms written as pure array operations completely without loops, while most efficient on small input, can lead to explosions in memory use. The present paper presents a solution to this problem using array streaming…, implemented in the automatic parallelization high-performance framework Bohrium. This makes it possible to use array programming in Python/NumPy code directly, even when the apparent memory requirement exceeds the machine capacity, since the automatic streaming eliminates the temporary memory overhead… by performing calculations in per-thread registers. Using Bohrium, we automatically fuse, JIT-compile, and execute NumPy array operations on GPGPUs without modification to the user programs. We present performance evaluations of three benchmarks, all of which show dramatic reductions in memory use from…
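The memory blow-up described here is easy to demonstrate, and a hand-rolled chunked loop shows the shape of the fix. The sketch below is plain NumPy and only mimics what Bohrium does automatically; per the abstract, Bohrium's actual mechanism streams through per-thread registers after JIT fusion:

```python
import numpy as np

N = 10_000_000

def naive(x):
    # Pure array code: every operator materializes a full-size
    # temporary, so peak memory is several times the size of x.
    return np.sum(np.sin(x) ** 2 + np.cos(x) ** 2)

def streamed(x, chunk=1_000_000):
    # Processing the same reduction chunk by chunk caps the
    # temporaries at chunk size -- a hand-written analogue of the
    # automatic streaming Bohrium performs.
    total = 0.0
    for i in range(0, x.size, chunk):
        c = x[i:i + chunk]
        total += np.sum(np.sin(c) ** 2 + np.cos(c) ** 2)
    return total

x = np.linspace(0.0, 1.0, N)
print(naive(x), streamed(x))   # same value, very different peak memory
```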

  2. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

The emergence of high-speed wide-area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because that requires tuning the TCP window size to improve bandwidth and reduce latency on a high-speed wide-area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into this package. Finally, a few applications using this package are discussed.
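JPARSS itself is a Java package; as a language-neutral illustration of the core idea (one payload split across several concurrent TCP streams so the aggregate in-flight data grows without retuning any single window), here is a minimal Python sketch in which the host, ports, and listener setup are assumptions of the example:

```python
import socket
import threading

def send_partition(host, port, chunk):
    # One partition per TCP connection.
    with socket.create_connection((host, port)) as s:
        s.sendall(chunk)

def parallel_send(host, base_port, data, n_streams=4):
    # Split the payload into n_streams partitions; listeners are
    # assumed on ports base_port .. base_port + n_streams - 1.
    size = -(-len(data) // n_streams)   # ceiling division
    threads = [
        threading.Thread(target=send_partition,
                         args=(host, base_port + i,
                               data[i * size:(i + 1) * size]))
        for i in range(n_streams)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```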

  3. Symbol Stream Combining Versus Baseband Combining for Telemetry Arraying

    Science.gov (United States)

    Divsalar, D.

    1983-01-01

The objectives of this article are to investigate and analyze the problem of combining symbol streams from many Deep Space Network stations to enhance the bit signal-to-noise ratio, and to compare the performance of this combining technique with baseband combining. Symbol stream combining (SSC) has some advantages and some disadvantages over baseband combining (BBC). SSC suffers almost no loss in combining the digital data and no loss due to the transmission of the digital data by microwave links between the stations. BBC suffers a 0.2 dB loss due to alignment and combining of the IF signals and a 0.2 dB loss due to transmission of signals by microwave links. On the other hand, the losses in the subcarrier demodulation assembly (SDA) and in the symbol synchronization assembly (SSA) for SSC are greater than the losses in the SDA and SSA for BBC. It is shown that SSC outperforms BBC by about 0.35 dB (in terms of the required bit energy-to-noise spectral density for a bit error rate of 10⁻³) for an array of three DSN antennas, namely 64 m, 34 m (T/R), and 34 m (R).
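The gains quoted above come from coherently weighting the station streams. As a rough illustration only (not the article's DSN signal model), a minimal Python sketch of maximal-ratio combining of BPSK symbol streams, with made-up amplitudes standing in for the 64 m and 34 m stations:

```python
import numpy as np

def combine_symbol_streams(streams, amplitudes):
    # Maximal-ratio rule: weight each station's matched-filter output
    # by signal amplitude over noise variance (unit noise here), so
    # the combined symbol SNR approaches the sum of station SNRs.
    w = np.asarray(amplitudes, dtype=float)
    return w @ np.asarray(streams)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000) * 2 - 1          # random BPSK symbols
amps = [2.0, 1.0, 1.0]                           # illustrative only
streams = [a * bits + rng.normal(size=bits.size) for a in amps]
combined = combine_symbol_streams(streams, amps)
print((np.sign(combined) == bits).mean())        # fewer errors than any
                                                 # single stream alone
```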

  4. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative, file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation at NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability statistically indistinguishable from the native SciDB storage engine.
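ArrayBridge itself lives inside SciDB, but the in situ idea it builds on (querying a slab of an array directly in its scientific file format rather than bulk-loading it first) can be sketched in a few lines of Python with h5py; the file and dataset names below are illustrative, not ArrayBridge's schema:

```python
import h5py
import numpy as np

# Write an array object into an HDF5 file, then query a slab of it
# "in situ" -- no bulk load into a database first.
with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("temperature", data=np.random.rand(1000, 1000),
                     chunks=(100, 100))

with h5py.File("simulation.h5", "r") as f:
    dset = f["temperature"]
    slab = dset[100:200, 300:400]   # declarative-style subarray query;
    print(slab.mean())              # only these chunks are read
```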

  5. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time-consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
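What makes transport suit stream processing is that the same transfer map is applied independently to every particle. A minimal NumPy sketch of that data-parallel pattern, using thin-lens 2x2 maps rather than the thick-element maps a code like MAD applies:

```python
import numpy as np

def drift(L):
    # 2x2 transverse transfer matrix of a drift of length L (m).
    return np.array([[1.0, L], [0.0, 1.0]])

def quad(kl):
    # Thin-lens quadrupole of integrated strength kl (1/m) -- a
    # simplification for illustration.
    return np.array([[1.0, 0.0], [-kl, 1.0]])

# One matrix for the whole line, applied to 1e6 particles at once:
# identical arithmetic on every particle is exactly the pattern that
# maps onto GPU stream processors.
rng = np.random.default_rng(1)
particles = rng.normal(scale=1e-3, size=(2, 1_000_000))   # rows: x, x'
line = drift(2.0) @ quad(0.5) @ drift(2.0)   # elements applied right to left
particles = line @ particles
print(particles.std(axis=1))
```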

  6. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

ABC95 is an array computer built around a multi-function network implemented with FPGA technology. The multi-function network supports conflict-free access by the processors to memory, and supports processor-to-processor data exchange over an enhanced MESH network. The ABC95 instruction set includes control instructions, scalar instructions, and vector instructions; the network instructions in particular are introduced. A programming environment for ABC95 assembly language is designed, and a programming environment for ABC95 based on VC++ is presented. It includes functions to load ABC95 programs and data, to store results, to run programs, and so on. In particular, the data types for conflict-free access on ABC95 are defined. The results show that these technologies support effective program development for the ABC95 array computer.

  7. Computationally efficient optimisation algorithms for WEC arrays

    DEFF Research Database (Denmark)

    Ferri, Francesco

    2017-01-01

    In this paper two derivative-free global optimization algorithms are applied for the maximisation of the energy absorbed by wave energy converter (WEC) arrays. Wave energy is a large and mostly untapped source of energy that could have a key role in the future energy mix. The collection of this r...

  8. Pilot-Streaming: Design Considerations for a Stream Processing Framework for High-Performance Computing

    OpenAIRE

    Andre Luckow; Peter Kasson; Shantenu Jha

    2016-01-01

    This White Paper (submitted to STREAM 2016) identifies an approach to integrate streaming data with HPC resources. The paper outlines the design of Pilot-Streaming, which extends the concept of Pilot-abstraction to streaming real-time data.

  9. Field computation for two-dimensional array transducers with limited diffraction array beams.

    Science.gov (United States)

    Lu, Jian-Yu; Cheng, Jiqi

    2005-10-01

A method is developed for calculating fields produced with a two-dimensional (2D) array transducer. This method decomposes an arbitrary 2D aperture weighting function into a set of limited diffraction array beams. Using the analytical expressions of limited diffraction beams, arbitrary continuous wave (cw) or pulse wave (pw) fields of 2D arrays can be obtained with a simple superposition of these beams. In addition, this method can be simplified and applied to a 1D array transducer of a finite or infinite elevation height. For beams produced with axially symmetric aperture weighting functions, this method reduces to the Fourier-Bessel method studied previously, where an annular array transducer can be used. The advantage of the method is that it is accurate and computationally efficient, especially in regions that are not far from the surface of the transducer (near field), which is important for medical imaging. Both computer simulations and a synthetic array experiment are carried out to verify the method. Results (Bessel beam, focused Gaussian beam, X wave and asymmetric array beams) show that the method is accurate compared with the Rayleigh-Sommerfeld diffraction formula and agrees well with the experiment.

  10. Assessment of arrays of in-stream tidal turbines in the Bay of Fundy.

    Science.gov (United States)

    Karsten, Richard; Swan, Amanda; Culina, Joel

    2013-02-28

    Theories of in-stream turbines are adapted to analyse the potential electricity generation and impact of turbine arrays deployed in Minas Passage, Bay of Fundy. Linear momentum actuator disc theory (LMADT) is combined with a theory that calculates the flux through the passage to determine both the turbine power and the impact of rows of turbine fences. For realistically small blockage ratios, the theory predicts that extracting 2000-2500 MW of turbine power will result in a reduction in the flow of less than 5 per cent. The theory also suggests that there is little reason to tune the turbines if the blockage ratio remains small. A turbine array model is derived that extends LMADT by using the velocity field from a numerical simulation of the flow through Minas Passage and modelling the turbine wakes. The model calculates the resulting speed of the flow through and around a turbine array, allowing for the sequential positioning of turbines in regions of strongest flow. The model estimates that over 2000 MW of power is possible with only a 2.5 per cent reduction in the flow. If turbines are restricted to depths less than 50 m, the potential power generation is reduced substantially, down to 300 MW. For large turbine arrays, the blockage ratios remain small and the turbines can produce maximum power with a drag coefficient equal to the Betz-limit value.
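The remark about tuning and the Betz-limit drag coefficient traces back to actuator-disc theory, where the power coefficient of an ideal turbine is C_P(a) = 4a(1 - a)² for induction factor a, maximized at the Betz limit. A three-line numerical check in Python:

```python
import numpy as np

# Ideal actuator-disc power coefficient; its maximum over the
# induction factor a is the Betz limit 16/27 ~ 0.593.
a = np.linspace(0.0, 0.5, 501)
cp = 4.0 * a * (1.0 - a) ** 2
i = cp.argmax()
print(a[i], cp[i], 16.0 / 27.0)   # a ~ 1/3, C_P ~ 0.593
```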

  11. Null stream analysis of Pulsar Timing Array data: localisation of resolvable gravitational wave sources

    Science.gov (United States)

    Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto

    2018-04-01

Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nano-Hertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and number of pulsars in the PTA. Additionally, we find that source sky localisation with the first International PTA data release is vastly superior to that achieved by its constituent regional PTAs.
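The N - 2 count arises because the Earth-term response to the two GW polarisations spans a two-dimensional subspace of the N pulsar residuals; everything orthogonal to it is a null stream. A hedged NumPy sketch with toy data (not the paper's Bayesian pipeline):

```python
import numpy as np

def null_streams(F, residuals):
    # F: (N, 2) Earth-term response of N pulsars to the two GW
    # polarisations for a trial sky location.
    # residuals: (N, n_samples) timing residuals.
    # Returns (N-2, n_samples) streams in which a common Earth-term
    # signal from that sky location cancels.
    U, _, _ = np.linalg.svd(F, full_matrices=True)
    null_basis = U[:, 2:]            # N-2 directions orthogonal to F
    return null_basis.T @ residuals

# Toy check: a pure injected Earth-term signal vanishes.
rng = np.random.default_rng(2)
N, T = 10, 256
F = rng.normal(size=(N, 2))
h = rng.normal(size=(2, T))          # two polarisation time series
res = F @ h                          # noise-free Earth-term signal
print(np.abs(null_streams(F, res)).max())   # ~ machine precision
```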

  12. Fast algorithm for automatically computing Strahler stream order

    Science.gov (United States)

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
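For a simple (non-braided) network the Strahler rule itself fits in a few lines; the recursive Python sketch below illustrates the rule only, and is not Lanfear's linear-time GIS algorithm, which also handles braided streams and multiple outlets:

```python
def strahler(children, node):
    # children maps each stream segment to the segments flowing into
    # it (empty list for headwater segments).
    ups = [strahler(children, c) for c in children[node]]
    if not ups:
        return 1                     # headwater segments are order 1
    top = max(ups)
    # Order increases only where two (or more) streams of the current
    # highest order meet; otherwise the highest order propagates.
    return top + 1 if ups.count(top) >= 2 else top

# Two first-order streams join to form a second-order stream, which a
# third first-order tributary does not upgrade.
net = {"a": [], "b": [], "c": [], "d": ["a", "b"], "out": ["d", "c"]}
print(strahler(net, "out"))   # 2
```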

  13. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLAs) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a multicriteria decision-making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for streaming applications is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.

  14. SIGMA, a new language for interactive array-oriented computing

    International Nuclear Information System (INIS)

    Hagedorn, R.; Reinfelds, J.; Vandoni, C.; Hove, L. van.

    1978-01-01

    A description is given of the principles and the main facilities of SIGMA (System for Interactive Graphical Mathematical Applications), a programming language for scientific computing whose major characteristics are: automatic handling of multi-dimensional rectangular arrays as basic data units, interactive operation of the system, and graphical display facilities. After introducing the basic concepts and features of the language, it describes in some detail the methods and operators for the automatic handling of arrays and for their graphical display, the procedures for construction of programs by users, and other facilities of the system. The report is a new version of CERN 73-5. (Auth.)

  15. Experimental investigations of ablation stream interaction dynamics in tungsten wire arrays: Interpenetration, magnetic field advection, and ion deflection

    Energy Technology Data Exchange (ETDEWEB)

    Swadling, G. F.; Lebedev, S. V.; Hall, G. N.; Suzuki-Vidal, F.; Burdiak, G. C.; Pickworth, L.; De Grouchy, P.; Skidmore, J.; Khoory, E.; Suttle, L.; Bennett, M.; Hare, J. D.; Clayson, T.; Bland, S. N.; Smith, R. A.; Stuart, N. H.; Patankar, S.; Robinson, T. S. [Blackett Laboratory, Imperial College, London SW7 2BW (United Kingdom); Harvey-Thompson, A. J. [Sandia National Laboratories, PO Box 5800, Albuquerque, New Mexico 87185-1193 (United States); Rozmus, W. [Department of Physics, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); and others

    2016-05-15

Experiments have been carried out to investigate the collisional dynamics of ablation streams produced by cylindrical wire array z-pinches. A combination of laser interferometric imaging, Thomson scattering, and Faraday rotation imaging has been used to make a range of measurements of the temporal evolution of various plasma and flow parameters. This paper presents a summary of previously published data, drawing together a range of different measurements in order to give an overview of the key results. The paper focuses mainly on the results of experiments with tungsten wire arrays. Early interferometric imaging measurements are reviewed, then more recent Thomson scattering measurements are discussed; these measurements provided the first direct evidence of ablation stream interpenetration in a wire array experiment. Combining the data from these experiments gives a view of the temporal evolution of the tungsten stream collisional dynamics. In the final part of the paper, we present new experimental measurements made using an imaging Faraday rotation diagnostic. These experiments investigated the structure of magnetic fields near the array axis directly; the presence of a magnetic field has previously been inferred based on Thomson scattering measurements of ion deflection near the array axis. Although the Thomson and Faraday measurements are not in full quantitative agreement, the Faraday data do qualitatively support the conjecture that the observed deflections are induced by a static toroidal magnetic field, which has been advected to the array axis by the ablation streams. It is likely that detailed modeling will be needed in order to fully understand the dynamics observed in the experiment.

  16. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is to first design a computational structure which is well suited for a wide range of vision tasks and then to develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  17. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years arrays have been widely used in different fields of seismological research. In particular they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the processing techniques, which are quite time consuming, have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 dual-processor Intel Pentium III PCs working at 550 MHz, and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and the southeast flanks of this volcano. A real time continuous acquisition system has been simulated by

  18. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    Science.gov (United States)

    Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad

    2013-12-01

Live video data is usually streamed over a tree-based or a mesh-based overlay network. Upon the departure of a peer with additional upload bandwidth, such an overlay network becomes very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which enhances load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to peers of heterogeneous strength in a fair-treatment distribution approach and to peers of homogeneous strength in a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  19. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    International Nuclear Information System (INIS)

    Ibrahimy, Abdullah Faruq Ibn; Rafiqul, Islam Md; Anwar, Farhat; Ibrahimy, Muhammad Ibn

    2013-01-01

Live video data is usually streamed over a tree-based or a mesh-based overlay network. Upon the departure of a peer with additional upload bandwidth, such an overlay network becomes very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which enhances load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to peers of heterogeneous strength in a fair-treatment distribution approach and to peers of homogeneous strength in a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  20. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  1. FORTRAN computer programs to process Savannah River Laboratory hydrogeochemical and stream-sediment reconnaissance data

    International Nuclear Information System (INIS)

    Zinkl, R.J.; Shettel, D.L. Jr.; D'Andrea, R.F. Jr.

    1980-03-01

FORTRAN computer programs have been written to read, edit, and reformat the hydrogeochemical and stream-sediment reconnaissance data produced by Savannah River Laboratory for the National Uranium Resource Evaluation program. The data are presorted by Savannah River Laboratory into stream sediment, ground water, and stream water for each 1° x 2° quadrangle. Extraneous information is eliminated, and missing analyses are assigned a specific value (-99999.0). Negative analyses are below the detection limit; the absolute value of a negative analysis is assumed to be the detection limit.
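The sentinel conventions in this record translate directly into a small cleaning routine. A Python/NumPy sketch (a stand-in for the original FORTRAN; the function name is of course hypothetical):

```python
import numpy as np

MISSING = -99999.0

def clean_analyses(values):
    # Missing analyses become NaN; a negative analysis is flagged as
    # below the detection limit, with its absolute value taken as
    # that limit, per the conventions described above.
    v = np.asarray(values, dtype=float)
    conc = np.where(v == MISSING, np.nan, np.abs(v))
    censored = (v < 0) & (v != MISSING)
    return conc, censored

print(clean_analyses([3.2, -0.5, MISSING]))
# (array([3.2, 0.5, nan]), array([False,  True, False]))
```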

  2. A MULTICORE COMPUTER SYSTEM FOR DESIGN OF STREAM CIPHERS BASED ON RANDOM FEEDBACK

    Directory of Open Access Journals (Sweden)

    Borislav BEDZHEV

    2013-01-01

Stream ciphers are an important tool for providing information security in present-day communication and computer networks. For this reason our paper describes a multicore computer system for the design of stream ciphers based on the so-named random feedback shift registers (RFSRs). The interest in this theme is inspired by the following facts. First, RFSRs are a relatively new type of stream cipher which demonstrates a significant enhancement of crypto-resistance in comparison with classical stream ciphers. Second, the study of the features of RFSRs is at a very initial stage. Third, the theory of RFSRs seems to be very hard, which leads to the necessity of exploring RFSRs mainly by means of computer models. The paper is organized as follows. First, the basics of RFSRs are recalled. After that, our multicore computer system for the design of stream ciphers based on RFSRs is presented. Finally, the advantages and possible areas of application of the computer system are discussed.

  3. STREAM

    DEFF Research Database (Denmark)

    Godsk, Mikkel

This paper presents a flexible model, 'STREAM', for transforming higher science education into blended and online learning. The model is inspired by ideas of active and collaborative learning and builds on feedback strategies well-known from Just-in-Time Teaching, Flipped Classroom, and Peer Instruction. The aim of the model is to provide both a concrete and comprehensible design toolkit for adopting and implementing educational technologies in higher science teaching practice and at the same time comply with diverse ambitions. As opposed to the above-mentioned feedback strategies, the STREAM model supports a relatively diverse use of educational technologies and may also be used to transform teaching into completely online learning. So far both teachers and educational developers have positively received the model and the initial design experiences show promise…

  4. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, in the process of calculating the MUSIC spectrum, the cyclic structure of the steering vector allows the inner product in the spatial spectrum computation to be realised by cyclic convolution. The computational cost of the MUSIC spectrum is thereby markedly lower than that of the conventional method, making this a very practical way to compute the MUSIC spectrum for circular arrays.
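For reference, the conventional computation this record speeds up evaluates, for every search angle, the inner product between a steering vector and the noise-subspace eigenvectors. A baseline Python sketch for a uniform circular array (one source assumed; the cyclic-convolution acceleration itself is not implemented here):

```python
import numpy as np

def music_spectrum(R, xy, wavelength, angles, n_src=1):
    # Conventional MUSIC pseudospectrum for a planar array.
    # R: (M, M) sample covariance; xy: (M, 2) element positions.
    _, V = np.linalg.eigh(R)
    En = V[:, :-n_src]                 # noise-subspace eigenvectors
    k = 2.0 * np.pi / wavelength
    p = np.empty(len(angles))
    for i, th in enumerate(angles):
        a = np.exp(1j * k * (xy @ [np.cos(th), np.sin(th)]))
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

# Eight-element uniform circular array, one source at 40 degrees.
M, lam = 8, 1.0
ph = 2.0 * np.pi * np.arange(M) / M
xy = 0.5 * lam * np.c_[np.cos(ph), np.sin(ph)]
rng = np.random.default_rng(3)
th0 = np.deg2rad(40.0)
a0 = np.exp(1j * (2 * np.pi / lam) * (xy @ [np.cos(th0), np.sin(th0)]))
S = np.outer(a0, rng.normal(size=400) + 1j * rng.normal(size=400))
S += 0.1 * (rng.normal(size=S.shape) + 1j * rng.normal(size=S.shape))
R = S @ S.conj().T / S.shape[1]
grid = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
print(np.rad2deg(grid[music_spectrum(R, xy, lam, grid).argmax()]))  # ~40
```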

  5. Computer-aided engineering system for design of sequence arrays and lithographic masks

    Science.gov (United States)

    Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.

    1996-01-01

    An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

  6. Efficient Processing of Continuous Skyline Query over Smarter Traffic Data Stream for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wang Hanning

    2013-01-01

The analysis and processing of multisource real-time transportation data streams lay a foundation for smart transportation's sensing, interconnection, integration, and real-time decision making. The strong computing ability and valid mass-data management mode provided by cloud computing make it feasible to handle continuous Skyline queries over massive distributed uncertain transportation data streams. In this paper, we give an architecture for layered smart-transportation data processing, and we formalize the description of continuous Skyline queries over smart transportation data. Besides, we propose the mMR-SUDS algorithm (a Skyline query algorithm for uncertain transportation stream data based on micro-batches in MapReduce), built on sliding-window division and the above architecture.

  7. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  8. Green computing: efficient energy management of multiprocessor streaming applications via model checking

    NARCIS (Netherlands)

    Ahmad, W.

    2017-01-01

    Streaming applications such as virtual reality, video conferencing, and face detection, impose high demands on a system’s performance and battery life. With the advancement in mobile computing, these applications are increasingly implemented on battery-constrained platforms, such as gaming consoles,

  9. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    Science.gov (United States)

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
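As a flavour of the pattern only (the paper targets a production streams platform, and real QRS detectors such as Pan-Tompkins are considerably more elaborate), a toy sample-by-sample detector in Python; the feature, threshold factor, and warm-up period are all assumptions of this sketch:

```python
import numpy as np

def qrs_detect_stream(samples, fs=250, k=20.0, refractory=0.2, warmup=0.5):
    # Toy streaming detector: square the first difference and fire on
    # an adaptive multiple of its running mean energy.
    peaks, last, mean, prev = [], -10**9, 0.0, None
    for n, x in enumerate(samples):
        if prev is None:
            prev = x
            continue
        feat = (x - prev) ** 2
        prev = x
        mean += 0.002 * (feat - mean)       # running energy estimate
        if n < warmup * fs:                 # calibrate before detecting
            continue
        if feat > k * mean and (n - last) > refractory * fs:
            peaks.append(n)
            last = n
    return peaks

# Synthetic test: one spike per second over mild noise at 250 Hz.
fs = 250
rng = np.random.default_rng(4)
sig = 0.05 * rng.normal(size=10 * fs)
sig[::fs] += 1.0                            # 10 injected "beats"
print(len(qrs_detect_stream(sig, fs)))      # ~9 (first beat in warm-up)
```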

  10. The Ocean Observatories Initiative: Unprecedented access to real-time data streaming from the Cabled Array through OOI Cyberinfrastructure

    Science.gov (United States)

    Knuth, F.; Vardaro, M.; Belabbassi, L.; Smith, M. J.; Garzio, L. M.; Crowley, M. F.; Kerfoot, J.; Kawka, O. E.

    2016-02-01

The National Science Foundation's Ocean Observatories Initiative (OOI) is a broad-scale, multidisciplinary facility that will transform oceanographic research by providing users with unprecedented access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The Cabled Array component of the OOI, installed and operated by the University of Washington, is located on the Juan de Fuca tectonic plate off the coast of Oregon. It is a unique network of >100 cabled instruments and instrumented moorings transmitting data to shore in real-time via fiber optic technology. Instruments now installed include HD video and digital still cameras, mass spectrometers, a resistivity-temperature probe inside the orifice of a high-temperature hydrothermal vent, upward-looking ADCPs, pH and pCO2 sensors, Horizontal Electrometer Pressure Inverted Echosounders, and many others. Here, we present the technical aspects of data streaming from the Cabled Array through the OOI Cyberinfrastructure. We illustrate the types of instruments and data products available, data volume and density, processing levels and algorithms used, data delivery methods, file formats and access methods through the graphical user interface. Our goal is to facilitate the use of and access to these unprecedented, co-registered oceanographic datasets. We encourage researchers to collaborate through the use of these simultaneous, interdisciplinary measurements, in the exploration of short-lived events (tectonic, volcanic, biological, severe storms), as well as long-term trends in ocean systems (circulation patterns, climate change, ocean acidity, ecosystem shifts).

  11. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    KAUST Repository

    Wang, Wei; Guyet, Thomas; Quiniou, René ; Cordier, Marie-Odile; Masseglia, Florent; Zhang, Xiangliang

    2014-01-01

In this work, we propose a novel framework of autonomic intrusion detection that fulfills online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-managing: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamical clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected in our institute as well as a set of benchmark KDD'99 data are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to adaptive Sequential Karhunen–Loeve method and static AP as well as three other static anomaly detection methods, namely, k-NN, PCA and SVM.
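The clustering step can be reproduced in miniature with scikit-learn's AffinityPropagation. The sketch below is an offline toy on synthetic features, not the paper's adaptive, self-updating pipeline, and the small-cluster rule for flagging anomalies is an assumption of this example:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Cluster feature vectors (random stand-ins for features extracted
# from HTTP requests) and flag members of very small clusters as
# possible anomalies.
rng = np.random.default_rng(5)
normal = rng.normal(0.0, 1.0, size=(200, 5))
attacks = rng.normal(6.0, 0.5, size=(5, 5))   # injected outliers
X = np.vstack([normal, attacks])

ap = AffinityPropagation(random_state=0).fit(X)
sizes = np.bincount(ap.labels_)
anomalous = np.isin(ap.labels_, np.where(sizes < 10)[0])
print(anomalous[-5:])   # the injected points tend to stand out
```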

  12. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    KAUST Repository

    Wang, Wei

    2014-06-22

In this work, we propose a novel framework of autonomic intrusion detection that fulfills online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-managing: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamical clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected in our institute as well as a set of benchmark KDD'99 data are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to adaptive Sequential Karhunen–Loeve method and static AP as well as three other static anomaly detection methods, namely, k-NN, PCA and SVM.

  13. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

In the engineering design of large-scale grid-connected photovoltaic power stations and the research and development of the many associated simulation and analysis systems, it is necessary to produce accurate computer plots of the operating characteristic curves of photovoltaic array units, and a good segmented non-linear interpolation algorithm is proposed for this purpose. In the calculation method, component performance parameters serve as the main design basis, from which the computer obtains the performance curves of the PV modules. Combined with the series and parallel connections of the PV array, computer drawing of the performance curve of the PV array unit can then be realized, the specific data can be calculated in the module of the PV development software, and the practical application of the PV array unit can be improved.

  14. Electrostatic mechanism of nucleosomal array folding revealed by computer simulation.

    Science.gov (United States)

    Sun, Jian; Zhang, Qing; Schlick, Tamar

    2005-06-07

    Although numerous experiments indicate that the chromatin fiber displays salt-dependent conformations, the associated molecular mechanism remains unclear. Here, we apply an irregular Discrete Surface Charge Optimization (DiSCO) model of the nucleosome with all histone tails incorporated to describe by Monte Carlo simulations salt-dependent rearrangements of a nucleosomal array with 12 nucleosomes. The ensemble of nucleosomal array conformations display salt-dependent condensation in good agreement with hydrodynamic measurements and suggest that the array adopts highly irregular 3D zig-zag conformations at high (physiological) salt concentrations and transitions into the extended "beads-on-a-string" conformation at low salt. Energy analyses indicate that the repulsion among linker DNA leads to this extended form, whereas internucleosome attraction drives the folding at high salt. The balance between these two contributions determines the salt-dependent condensation. Importantly, the internucleosome and linker DNA-nucleosome attractions require histone tails; we find that the H3 tails, in particular, are crucial for stabilizing the moderately folded fiber at physiological monovalent salt.

  15. Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams

    Science.gov (United States)

    Bennamoun, Mohammed

    2014-01-01

    In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses. PMID:24817888

  16. Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams

    Directory of Open Access Journals (Sweden)

    Jaafar Abduo

    2014-01-01

In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses.

  17. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    Science.gov (United States)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
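The gamma weighting matters because transmission is a convex function of optical depth, so averaging exp(-τ/μ₀) over the distribution differs from evaluating it at the mean optical depth. A numerical illustration in Python (the parameter values are arbitrary, and only the direct beam is shown, not the full two-stream solution):

```python
import numpy as np
from scipy.stats import gamma

# Domain-averaged direct transmittance when unresolved cloud optical
# depth follows a gamma distribution with mean mean_tau and shape nu.
mean_tau, nu, mu0 = 10.0, 2.0, 0.5
dist = gamma(a=nu, scale=mean_tau / nu)
taus = dist.rvs(size=100_000, random_state=6)
print(np.exp(-mean_tau / mu0))        # plane-parallel (mean-tau) value
print(np.exp(-taus / mu0).mean())     # gamma-weighted value (much larger)
```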

  18. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    Science.gov (United States)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming

  19. Computational analysis of vertical axis wind turbine arrays

    Science.gov (United States)

    Bremseth, J.; Duraisamy, K.

    2016-10-01

    Canonical problems involving single, pairs, and arrays of vertical axis wind turbines (VAWTs) are investigated numerically with the objective of understanding the underlying flow structures and their implications on energy production. Experimental studies by Dabiri (J Renew Sustain Energy 3, 2011) suggest that VAWTs demand less stringent spacing requirements than their horizontal axis counterparts and additional benefits may be obtained by optimizing the placement and rotational direction of VAWTs. The flowfield of pairs of co-/counter-rotating VAWTs shows some similarities with pairs of cylinders in terms of wake structure and vortex shedding. When multiple VAWTs are placed in a column, the extent of the wake is seen to spread further downstream, irrespective of the direction of rotation of individual turbines. However, the aerodynamic interference between turbines gives rise to regions of excess momentum between the turbines which lead to significant power augmentations. Studies of VAWTs arranged in multiple columns show that the downstream columns can actually be more efficient than the leading column, a proposition that could lead to radical improvements in wind farm productivity.

  20. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    Science.gov (United States)

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to only pay a cloud computing provider for what is used. Moreover, as well as financial efficiency, cloud computing is an ecologically-friendly technology; it enables efficient data-sharing and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
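Identifying affected probes is mechanically simple once probe sequences are in hand; the scale of the analysis across public-domain array datasets is what motivates the cloud deployment. A tiny Python helper to locate runs of four or more guanines (the run length is an assumption of this sketch):

```python
import re

def g_spot_positions(probe, min_run=4):
    # Return (start, end) index pairs of guanine runs of length
    # >= min_run in a probe sequence.
    return [(m.start(), m.end())
            for m in re.finditer("G{%d,}" % min_run, probe.upper())]

print(g_spot_positions("ATCGGGGTACGTACGGGGGTACGTA"))  # [(3, 7), (14, 19)]
```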

  1. Use of computer programs STLK1 and STWT1 for analysis of stream-aquifer hydraulic interaction

    Science.gov (United States)

    Desimone, Leslie A.; Barlow, Paul M.

    1999-01-01

Quantifying the hydraulic interaction of aquifers and streams is important in the analysis of stream base flow, flood-wave effects, and contaminant transport between surface- and ground-water systems. This report describes the use of two computer programs, STLK1 and STWT1, to analyze the hydraulic interaction of streams with confined, leaky, and water-table aquifers during periods of stream-stage fluctuations and uniform, areal recharge. The computer programs are based on analytical solutions to the ground-water-flow equation in stream-aquifer settings and calculate ground-water levels, seepage rates across the stream-aquifer boundary, and bank storage that result from arbitrarily varying stream stage or recharge. Analysis of idealized, hypothetical stream-aquifer systems is used to show how aquifer type, aquifer boundaries, and aquifer and streambank hydraulic properties affect aquifer response to stresses. Published data from alluvial and stratified-drift aquifers in Kentucky, Massachusetts, and Iowa are used to demonstrate application of the programs to field settings. Analytical models of these three stream-aquifer systems are developed on the basis of available hydrogeologic information. Stream-stage fluctuations and recharge are applied to the systems as hydraulic stresses. The models are calibrated by matching ground-water levels calculated with computer program STLK1 or STWT1 to measured ground-water levels. The analytical models are used to estimate hydraulic properties of the aquifer, aquitard, and streambank; to evaluate hydrologic conditions in the aquifer; and to estimate seepage rates and bank-storage volumes resulting from flood waves and recharge. Analysis of field examples demonstrates the accuracy and limitations of the analytical solutions and programs when applied to actual ground-water systems and the potential uses of the analytical methods as alternatives to numerical modeling for quantifying stream-aquifer interactions.

  2. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.

  3. Prospects for quantum computing with an array of ultracold polar paramagnetic molecules.

    Science.gov (United States)

    Karra, Mallikarjun; Sharma, Ketan; Friedrich, Bretislav; Kais, Sabre; Herschbach, Dudley

    2016-03-07

Arrays of trapped ultracold molecules represent a promising platform for implementing a universal quantum computer. DeMille [Phys. Rev. Lett. 88, 067901 (2002)] has detailed a prototype design based on Stark states of polar ¹Σ molecules as qubits. Herein, we consider an array of polar ²Σ molecules which are, in addition, inherently paramagnetic and whose Hund's case (b) free-rotor pair-eigenstates are Bell states. We show that by subjecting the array to combinations of concurrent homogeneous and inhomogeneous electric and magnetic fields, the entanglement of the array's Stark and Zeeman states can be tuned and the qubit sites addressed. Two schemes for implementing an optically controlled CNOT gate are proposed and their feasibility discussed in the face of the broadening of spectral lines due to dipole-dipole coupling and the inhomogeneity of the electric and magnetic fields.

  4. Nanoscale phosphorus atom arrays created using STM for the fabrication of a silicon based quantum computer.

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, J. L. (Jeremy L.); Schofield, S. R. (Steven R.); Simmons, M. Y. (Michelle Y.); Clark, R. G. (Robert G.); Dzurak, A. S. (Andrew S.); Curson, N. J. (Neil J.); Kane, B. E. (Bruce E.); McAlpine, N. S. (Neal S.); Hawley, M. E. (Marilyn E.); Brown, G. W. (Geoffrey W.)

    2001-01-01

Quantum computers offer the promise of formidable computational power for certain tasks. Of the various possible physical implementations of such a device, silicon based architectures are attractive for their scalability and ease of integration with existing silicon technology. These designs use either the electron or nuclear spin state of single donor atoms to store quantum information. Here we describe a strategy to fabricate an array of single phosphorus atoms in silicon for the construction of such a silicon based quantum computer. We demonstrate the controlled placement of single phosphorus bearing molecules on a silicon surface. This has been achieved by patterning a hydrogen monolayer 'resist' with a scanning tunneling microscope (STM) tip and exposing the patterned surface to phosphine (PH3) molecules. We also describe preliminary studies into a process to incorporate these surface phosphorus atoms into the silicon crystal at the array sites. Keywords: quantum computing, nanotechnology, scanning tunneling microscopy, hydrogen lithography

  5. An end-to-end computing model for the Square Kilometre Array

    NARCIS (Netherlands)

    Jongerius, R.; Wijnholds, S.; Nijboer, R.; Corporaal, H.

    2014-01-01

    For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model for engineers and astronomers to understand these relations and make tradeoffs in

  6. Scanning tunnelling microscope fabrication of phosphorus array in silicon for a nuclear spin quantum computer

    International Nuclear Information System (INIS)

    O'Brien, J.L.; Schofield, S.R.; Simmons, M.Y.; Clark, R.G.; Dzurak, A.S.; Prawer, S.; Adrienko, I.; Cimino, A.

    2000-01-01

Full text: In the vigorous worldwide effort to experimentally build a quantum computer, recent intense interest has focussed on solid state approaches for their promise of scalability. Particular attention has been given to silicon-based proposals that can readily be integrated into conventional computing technology. For example, the Kane design uses the well-isolated nuclear spin of phosphorus donor nuclei (I=1/2) as the qubits, embedded in isotopically pure ²⁸Si (I=0). We demonstrate the ability to fabricate a precise array of P atoms on a clean Si surface with atomic-scale resolution, compatible with the fabrication of the Kane quantum computer.

  7. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion

  8. Terahertz computed tomography in three dimensions using a pyroelectric array detector

    Science.gov (United States)

    Li, Bin; Wang, Dayong; Zhou, Xun; Rong, Lu; Huang, Haochong; Wan, Min; Wang, Yunxin

    2017-05-01

    The terahertz frequency range spans 0.1 to 10 THz. Terahertz radiation can penetrate nonpolar and nonmetallic materials, such as plastics, wood, and clothes, a feature that gives terahertz imaging considerable research value. Terahertz computed tomography exploits this penetrability to obtain three-dimensional projection data of an object. In this paper, continuous-wave terahertz computed tomography with a pyroelectric array detector is presented. Compared with scanning terahertz computed tomography, a pyroelectric array detector can acquire a large number of projections in a short time, since the array acquisition mode omits the point-by-point scanning in the vertical and horizontal directions. Two-dimensional cross-sectional images of the object are obtained with the filtered back-projection algorithm. The two walls of a straw span 80 pixels; multiplied by the pixel size, this gives a straw diameter of about 6.4 mm. Compared with the actual diameter of the straw, the relative error is 6%. To reconstruct the three-dimensional internal structure of the straw, rows 70 to 150 along the y direction of the array detector are selected and reconstructed with the filtered back-projection algorithm. With a pixel size of 80 μm, the height of the reconstructed three-dimensional image of the straw is 6.48 mm. The presented system can rapidly reconstruct three-dimensional objects using a pyroelectric array detector and demonstrates the feasibility of non-destructive evaluation and security testing.

  9. Benthic invertebrate fauna, small streams

    Science.gov (United States)

    J. Bruce Wallace; S.L. Eggert

    2009-01-01

    Small streams (first- through third-order streams) make up >98% of the total number of stream segments and >86% of stream length in many drainage networks. Small streams occur over a wide array of climates, geology, and biomes, which influence temperature, hydrologic regimes, water chemistry, light, substrate, stream permanence, a basin's terrestrial plant...

  10. Reduced-Complexity Direction of Arrival Estimation Using Real-Valued Computation with Arbitrary Array Configurations

    Directory of Open Access Journals (Sweden)

    Feng-Gang Yan

    2018-01-01

    Full Text Available A low-complexity algorithm is presented to dramatically reduce the complexity of the multiple signal classification (MUSIC algorithm for direction of arrival (DOA estimation, in which both tasks of eigenvalue decomposition (EVD and spectral search are implemented with efficient real-valued computations, leading to about 75% complexity reduction as compared to the standard MUSIC. Furthermore, the proposed technique has no dependence on array configurations and is hence suitable for arbitrary array geometries, which shows a significant implementation advantage over most state-of-the-art unitary estimators including unitary MUSIC (U-MUSIC. Numerical simulations over a wide range of scenarios are conducted to show the performance of the new technique, which demonstrates that with a significantly reduced computational complexity, the new approach is able to provide a close accuracy to the standard MUSIC.
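
    For concreteness, the standard complex-valued MUSIC pseudospectrum that this work reduces to real-valued computation can be sketched in a few lines of NumPy. This is a generic uniform-linear-array illustration; the function and parameter names are ours, not the paper's:

```python
import numpy as np

def music_spectrum(R, n_sources, d=0.5, n_grid=361):
    """Standard (complex-valued) MUSIC pseudospectrum for a uniform
    linear array of M elements spaced d wavelengths apart.

    R         : (M, M) sample covariance matrix of array snapshots
    n_sources : assumed number of incident signals
    """
    M = R.shape[0]
    # eigh returns ascending eigenvalues; the noise subspace is
    # spanned by the eigenvectors of the M - n_sources smallest.
    _, V = np.linalg.eigh(R)
    En = V[:, :M - n_sources]
    angles = np.linspace(-90.0, 90.0, n_grid)
    p = np.empty(n_grid)
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(th))
        # Peaks occur where the steering vector a(theta) is
        # (nearly) orthogonal to the noise subspace.
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p
```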

  11. On a computational study for investigating acoustic streaming and heating during focused ultrasound ablation of liver tumor

    International Nuclear Information System (INIS)

    Solovchuk, Maxim A.; Sheu, Tony W.H.; Thiriet, Marc; Lin, Win-Li

    2013-01-01

    The influences of blood vessels and focal location on the temperature distribution during high-intensity focused ultrasound (HIFU) ablation of liver tumors are studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field in the hepatic cancerous region. The model construction is based on the linear Westervelt and bioheat equations as well as the nonlinear Navier–Stokes equations for the liver parenchyma and blood vessels. The effect of acoustic streaming is also taken into account in the present HIFU simulation study. Different blood vessel diameters and focal point locations were investigated. We found from this three-dimensional numerical study that in large blood vessels both the convective cooling and the acoustic streaming can considerably change the temperature field and the thermal lesion near blood vessels. If the blood vessel is located within the beam width, both the acoustic streaming and the blood flow cooling effects should be addressed. The temperature rise on the blood vessel wall generated by a 1.0 MHz focused ultrasound transducer with a focal intensity of 327 W/cm² was 54% lower when the acoustic streaming effect was taken into account. Subject to the applied acoustic power, the streaming velocity in a 3 mm blood vessel is 12 cm/s. The necrosed volume can be reduced by thirty percent when the acoustic streaming effect is taken into account. -- Highlights: • 3D three-field coupling physical model for focused ultrasound tumor ablation is presented. • Acoustic streaming and blood flow cooling effects on ultrasound heating are investigated. • Acoustic streaming can considerably affect the temperature distribution. • The lesion can be reduced by 30% due to the acoustic streaming effect. • Temperature on the blood vessel wall is reduced by 54% due to the acoustic streaming effect.
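
    For reference, acoustics-thermal models of this kind are typically built around the Pennes bioheat equation with an acoustic heating source; a standard textbook form (assumed here, not quoted from the paper) is

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot (k \, \nabla T)
  - \rho_b c_b \, w_b \,(T - T_a)
  + q_{\mathrm{ac}}
```

    where ρ, c, and k are the tissue density, specific heat, and thermal conductivity, the subscript b marks blood properties, w_b is the perfusion rate, T_a the arterial temperature, and q_ac the absorbed acoustic power density.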

  12. Computer program SCAP-BR for gamma-ray streaming through multi-legged ducts

    International Nuclear Information System (INIS)

    Byoun, T.Y.; Babel, P.J.; Dajani, A.T.

    1977-01-01

    A computer program, SCAP-BR, has been developed at Burns and Roe for the gamma-ray streaming analysis through multi-legged ducts. SCAP-BR is a modified version of the single scattering code, SCAP, incorporating capabilities of handling multiple scattering and volumetric source geometries. It utilizes the point kernel integration method to calculate both the line-of-sight and scattered gamma dose rates by employing the ray tracing technique through complex shield geometries. The multiple scattering is handled by a repeated process of the single scatter method through each successive scatter region and collapsed pseudo source meshes constructed on the relative coordinate systems. The SCAP-BR results have been compared with experimental data for a Z-type (three-legged) concrete duct with a Co-60 source placed at the duct entrance point. The SCAP-BR dose rate predictions along the duct axis demonstrate an excellent agreement with the measured values
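
    As a rough illustration of the point-kernel integration such codes are built on (a single line-of-sight ray, not SCAP-BR's multi-leg scatter treatment), the uncollided term with a buildup factor can be sketched as below; the numbers and the linear buildup form are placeholders:

```python
import numpy as np

def point_kernel_dose_rate(S, mu, r, buildup=lambda mux: 1.0 + mux):
    """Point-kernel estimate for a point isotropic gamma source:
    B(mu*r) * exp(-mu*r) / (4*pi*r**2), scaled by source strength S.

    S       : source strength (photons/s)
    mu      : linear attenuation coefficient of the shield (1/cm)
    r       : source-to-detector distance through the shield (cm)
    buildup : buildup factor B(mu*r); the linear form used here is
              only a placeholder for tabulated data.
    """
    mux = mu * r
    return S * buildup(mux) * np.exp(-mux) / (4.0 * np.pi * r ** 2)

# Example: Co-60-like source behind ~10 cm of concrete-like shield.
print(point_kernel_dose_rate(S=1e10, mu=0.13, r=10.0))
```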

  13. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    International Nuclear Information System (INIS)

    Kim, Joshua; Zhang, Tiezhi; Lu, Weiguo

    2014-01-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source–dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted beside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10–15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter-rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source–dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  14. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    Science.gov (United States)

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted beside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter-rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  15. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    Science.gov (United States)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array about its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
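
    A minimal sketch of the reconstruction step, assuming scikit-image's radon/iradon routines as a stand-in for the authors' implementation (the phantom and angles are purely illustrative):

```python
import numpy as np
from skimage.transform import radon, iradon

# Stand-in for the multi-angle elevational scans: project a simple
# phantom over 0..179 degrees, then invert with filtered back
# projection (the inverse Radon transform).
phantom = np.zeros((128, 128))
phantom[48:80, 56:72] = 1.0
theta = np.arange(0.0, 180.0, 1.0)
sinogram = radon(phantom, theta=theta)             # forward projections
recon = iradon(sinogram, theta=theta, filter_name="ramp")
print(recon.shape)
```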

  16. Omniscopes: Large area telescope arrays with only NlogN computational cost

    International Nuclear Information System (INIS)

    Tegmark, Max; Zaldarriaga, Matias

    2010-01-01

    We show that the class of antenna layouts for telescope arrays allowing cheap analysis hardware (with correlator cost scaling as N log N rather than N² with the number of antennas N) is encouragingly large, including not only previously discussed rectangular grids but also arbitrary hierarchies of such grids, with arbitrary rotations and shears at each level. We show that all correlations for such a 2D array with an n-level hierarchy can be efficiently computed via a fast Fourier transform in not two but 2n dimensions. This can allow major correlator cost reductions for science applications requiring exquisite sensitivity at widely separated angular scales, for example, 21 cm tomography (where short baselines are needed to probe the cosmological signal and long baselines are needed for point source removal), helping enable future 21 cm experiments with thousands or millions of cheap dipolelike antennas. Such hierarchical grids combine the angular resolution advantage of traditional array layouts with the cost advantage of a rectangular fast Fourier transform telescope. We also describe an algorithm for how a subclass of hierarchical arrays can efficiently use rotation synthesis to produce global sky maps with minimal noise and a well-characterized synthesized beam.
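
    To make the scaling concrete: for antennas on a regular grid, the sum of pair correlations for every unique baseline can be read off from an FFT-based autocorrelation of the snapshot, at O(N log N) cost per snapshot rather than the O(N²) of an explicit correlator. A toy NumPy sketch (grid size and names are illustrative, not from the paper):

```python
import numpy as np

def gridded_visibilities(snap):
    """Sum of antenna-pair correlations for every unique baseline of
    a rectangular-grid array, computed in O(N log N) via the FFT.

    snap : complex snapshot of shape (ny, nx), one sample per antenna.
    Returns an array indexed by baseline offset (dy, dx).
    """
    ny, nx = snap.shape
    # Zero-pad to avoid wrap-around; by the convolution theorem,
    # IFFT(|FFT(s)|^2) is the spatial autocorrelation of the grid,
    # i.e. sum_i s[i] * conj(s[i + d]) for every lag d.
    S = np.fft.fft2(snap, s=(2 * ny, 2 * nx))
    return np.fft.ifft2(np.abs(S) ** 2)

rng = np.random.default_rng(0)
snap = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
vis = gridded_visibilities(snap)   # 256 antennas, all baselines
```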

  17. Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Yang-Yang Dong

    2018-01-01

    Full Text Available Although L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for L-shaped array with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for L-shaped array and compensate the mutual coupling blindly via sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong correlation among the source signals. Finally, the estimations of azimuth and elevation angles are achieved simultaneously without pair matching via the complex eigenvalue technique. Compared with the existing methods, the proposed method is computationally efficient without spectrum search or polynomial rooting and also has fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results have demonstrated the effectiveness of the proposed method.

  18. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  19. PREVENTIVE SIGNATURE MODEL FOR SECURE CLOUD DEPLOYMENT THROUGH FUZZY DATA ARRAY COMPUTATION

    Directory of Open Access Journals (Sweden)

    R. Poorvadevi

    2017-01-01

    Full Text Available Cloud computing is a resource pool that offers boundless services to its end users, who in turn depend heavily on cloud service providers. Cloud provides service access across geographic locations in an efficient way. Although it offers numerous services, the client-side system lacks adequate methods, security policies, and protocols for protecting cloud customers' secret-level transactions and other privacy-related information. The proposed model therefore offers a solution for securing cloud users' confidential data and application deployment, and for verifying the genuineness of the user, by applying a scheme referred to as fuzzy data array computation. Fuzzy data array computation provides an effective signature retrieval and evaluation system through which customers' data can be safeguarded along with their applications. The signature system can be implemented in a cloud environment using the CloudSim 3.0 simulator and supports security operations over data centre and cloud vendor locations in an effective manner.

  20. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point arithmetic on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure that simulation accuracy is maintained.
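
    A toy illustration of the fixed-point trade-off the paper quantifies: complex multiplication at several fractional bit widths, with the relative error shrinking as precision grows. Rounding and scaling here are simplified relative to real FPGA arithmetic:

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Quantize to signed fixed-point with `frac_bits` fractional bits."""
    return np.round(x * (1 << frac_bits)).astype(np.int64)

def fixed_cmul(a, b, frac_bits):
    """Complex multiply in fixed point: (ar + j*ai)(br + j*bi),
    rescaling the double-width integer product back to frac_bits."""
    ar, ai = to_fixed(a.real, frac_bits), to_fixed(a.imag, frac_bits)
    br, bi = to_fixed(b.real, frac_bits), to_fixed(b.imag, frac_bits)
    re = (ar * br - ai * bi) >> frac_bits
    im = (ar * bi + ai * br) >> frac_bits
    return (re + 1j * im) / (1 << frac_bits)

a, b = 0.7 - 0.2j, -0.4 + 0.9j
for q in (8, 16, 24):
    err = abs(fixed_cmul(a, b, q) - a * b)
    print(q, err)   # error shrinks as fractional bits increase
```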

  1. The Square Kilometre Array Science Data Processor. Preliminary compute platform design

    International Nuclear Information System (INIS)

    Broekema, P.C.; Nieuwpoort, R.V. van; Bal, H.E.

    2015-01-01

    The Square Kilometre Array is a next-generation radio telescope, to be built in South Africa and Western Australia. It is currently in its detailed design phase, with procurement and construction scheduled to start in 2017. The SKA Science Data Processor is the high-performance computing element of the instrument, responsible for producing science-ready data. This is a major IT project, with the Science Data Processor expected to challenge the computing state of the art even in 2020. In this paper we introduce the preliminary Science Data Processor design and the principles that guide the design process, as well as the constraints on the design. We introduce a highly scalable and flexible system architecture capable of handling the SDP workload.

  2. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    Directory of Open Access Journals (Sweden)

    Junpeng Shi

    2017-02-01

    Full Text Available In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is applied only to the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and loses little information. Simulation results show that, in LGA, compared to other methods, the proposed method achieves performance improvements in both white and colored noise conditions.

  3. A Statistical Model and Computer Program for Preliminary Calculations Related to the Scaling of Sensor Arrays

    International Nuclear Information System (INIS)

    Max Morris

    2001-01-01

    Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information, however in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time
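
    As an illustration of the kind of calculation such a planning tool performs (a toy Monte Carlo, not the report's actual model), the two-class error rate of an M-sensor array whose measurement error splits into a common and an independent component can be estimated as follows; note how the common component caps the benefit of adding sensors:

```python
import numpy as np

def misclassification_rate(m_sensors, delta=1.0, sigma_common=0.5,
                           sigma_indep=1.0, n_trials=100_000, seed=0):
    """Monte Carlo error rate of a mean-threshold classifier for two
    classes separated by `delta`, with measurement error split into
    a component common to all sensors and one independent per sensor."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, n_trials)             # equal priors
    common = rng.normal(0, sigma_common, n_trials)    # shared error
    indep = rng.normal(0, sigma_indep, (n_trials, m_sensors))
    x = labels[:, None] * delta + common[:, None] + indep
    decisions = x.mean(axis=1) > delta / 2
    return np.mean(decisions != labels.astype(bool))

# Independent noise averages out with m, common noise does not:
for m in (1, 4, 16, 64):
    print(m, misclassification_rate(m))
```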

  4. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  5. Computing Diameter in the Streaming and Sliding-Window Models (Preprint)

    National Research Council Canada - National Science Library

    Feigenbaum, Joan; Kannan, Sampath; Zhang, Jian

    2002-01-01

    We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of n points or a sliding window of size n, any exact algorithm for diameter requires Omega(n) bits of space...
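
    One practical consequence of such lower bounds (a standard workaround, not a result of this paper) is to settle for approximation: anchoring on the first stream point yields a factor-2 estimate of the diameter in constant space.

```python
import math

def approx_diameter(points):
    """One-pass 2-approximation of the diameter of a point stream:
    keep only the first point and the largest distance r to it.
    The true diameter D satisfies r <= D <= 2r by the triangle
    inequality, so 2r overestimates D by at most a factor of 2."""
    it = iter(points)
    try:
        anchor = next(it)
    except StopIteration:
        return 0.0
    r = 0.0
    for p in it:
        r = max(r, math.dist(anchor, p))
    return 2.0 * r

print(approx_diameter([(0, 0), (3, 4), (6, 0), (1, 1)]))
```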

  6. Computer-aided mapping of stream channels beneath the Lawrence Livermore National Laboratory Superfund Site

    Energy Technology Data Exchange (ETDEWEB)

    Sick, M. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    The Lawrence Livermore National Laboratory (LLNL) site rests upon 300-400 feet of highly heterogeneous braided stream sediments which have been contaminated by a plume of volatile organic compounds (VOCs). The stream channels are filled with highly permeable coarse-grained materials that provide quick avenues for contaminant transport. The plume of VOCs has migrated off site in the TFA area, making it the area of greatest concern. I mapped the paleo-stream channels in the TFA area using SLICE, an LLNL Auto-CADD routine. SLICE constructed 2D cross sections and sub-horizontal views of chemical, geophysical, and lithologic data sets. I interpreted these 2D views as a braided stream environment, delineating the edges of stream channels. The interpretations were extracted from Auto-CADD and placed into Earth Vision's 3D modeling and viewing routines. Several 3D correlations have been generated, but no model has yet been chosen as a best fit.

  7. Stream function method for computing steady rotational transonic flows with application to solar wind-type problems

    International Nuclear Information System (INIS)

    Kopriva, D.A.

    1982-01-01

    A numerical scheme has been developed to solve the quasilinear form of the transonic stream function equation. The method is applied to compute steady two-dimensional axisymmetric solar wind-type problems. A single, perfect, non-dissipative, homentropic and polytropic gas-dynamics is assumed. The four equations governing mass and momentum conservation are reduced to a single nonlinear second order partial differential equation for the stream function. Bernoulli's equation is used to obtain a nonlinear algebraic relation for the density in terms of stream function derivatives. The vorticity includes the effects of azimuthal rotation and Bernoulli's function and is determined from quantities specified on boundaries. The approach is efficient. The number of equations and independent variables has been reduced and a rapid relaxation technique developed for the transonic full potential equation is used. Second order accurate central differences are used in elliptic regions. In hyperbolic regions a dissipation term motivated by the rotated differencing scheme of Jameson is added for stability. A successive-line-overrelaxation technique also introduced by Jameson is used to solve the equations. The nonlinear equation for the density is a double valued function of the stream function derivatives. The velocities are extrapolated from upwind points to determine the proper branch and Newton's method is used to iteratively compute the density. This allows accurate solutions with few grid points
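
    For orientation, the compressible (Stokes) stream function that collapses the conservation equations to a single unknown is defined, in a standard axisymmetric form (notation assumed, not quoted from the paper), by

```latex
\rho\, u_z = \frac{1}{r}\frac{\partial \psi}{\partial r},
\qquad
\rho\, u_r = -\frac{1}{r}\frac{\partial \psi}{\partial z}
```

    so that mass conservation is satisfied identically, and Bernoulli's equation then supplies the density ρ in terms of the stream function derivatives, as the abstract describes.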

  8. An economic analysis of online streaming: How the music industry can generate revenues from cloud computing

    OpenAIRE

    Thomes, Tim Paul

    2011-01-01

    This paper investigates the upcoming business model of online streaming services allowing music consumers either to subscribe to a service which provides free-of-charge access to streaming music and which is funded by advertising, or to pay a monthly flat fee in order to get ad-free access to the content of the service accompanied with additional benefits. Both businesses will be launched by a single provider of streaming music. By imposing a two-sided market model on the one hand combined wi...

  9. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  10. Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface

    Directory of Open Access Journals (Sweden)

    N. Jeremy Hill

    2012-12-01

    Full Text Available Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two dichotically presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80% and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  11. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    Science.gov (United States)

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  12. Computer programs for the acquisition and analysis of eddy-current array probe data

    International Nuclear Information System (INIS)

    Pate, J.R.; Dodd, C.V.

    1996-07-01

    The objective of the Improved Eddy-Current ISI (in-service inspection) for Steam Generators Tubing program is to upgrade and validate eddy-current inspections, including probes, instrumentation, and data processing techniques for ISI of new, used, and repaired steam generator tubes; to improve defect detection, classification, and characterization as affected by diameter and thickness variations, denting, probe wobble, tube sheet, tube supports, copper and sludge deposits, even when defect types and other variables occur in combination; and to transfer this advanced technology to NRC's mobile NDE laboratory and staff. This report documents computer programs that were developed for the acquisition of eddy-current data from specially designed 16-coil array probes. Complete code as well as instructions for use are provided.

  13. Brain Computer Interface Learning for Systems Based on Electrocorticography and Intracortical Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Shivayogi V Hiremath

    2015-06-01

    Full Text Available A brain-computer interface (BCI system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  14. Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays.

    Science.gov (United States)

    Hiremath, Shivayogi V; Chen, Weidong; Wang, Wei; Foldes, Stephen; Yang, Ying; Tyler-Kabara, Elizabeth C; Collinger, Jennifer L; Boninger, Michael L

    2015-01-01

    A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  15. Performance study of monochromatic synchrotron X-ray computed tomography using a linear array detector

    Energy Technology Data Exchange (ETDEWEB)

    Kazama, Masahiro; Takeda, Tohoru; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akiba, Masahiro; Yuasa, Tetsuya; Hyodo, Kazuyuki; Ando, Masami; Akatsuka, Takao

    1997-09-01

    Monochromatic x-ray computed tomography (CT) using synchrotron radiation (SR) is being developed for the detection of non-radioactive contrast materials at low concentration for application in clinical diagnosis. A new SR-CT system with improved contrast resolution was constructed using a linear array detector, which provides a wide dynamic range, and a double monochromator. The performance of this system was evaluated in a phantom and a rat model of brain ischemia. The system consists of a silicon (111) double-crystal monochromator, an x-ray shutter, an ionization chamber, x-ray slits, a scanning table for the target organ, and an x-ray linear array detector. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring in KEK, Japan. In this experiment, the reconstructed image of the spatial-resolution phantom clearly showed the 1 mm holes. At 1 mm slice thickness, the above-K-edge image of the phantom resolved iodine-based contrast material at a concentration of 200 μg/ml, whereas the K-edge energy subtraction image resolved contrast material at a concentration of 500 μg/ml. The cerebral arteries filled with iodine microspheres were clearly revealed, and the ischemic regions in the right temporal and frontal lobes were depicted as non-vascular regions. The measured minimal detectable concentration of iodine in the above-K-edge image is about 6 times higher than the expected value of 35.3 μg/ml because of the high dark current of this detector. Thus, a CCD detector cooled by liquid nitrogen, which improves the dynamic range, is under construction. (author)

  16. Programmable stream prefetch with resource optimization

    Science.gov (United States)

    Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan

    2013-01-08

    A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments a prefetching depth of a first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
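
    A toy software model of the bookkeeping such an engine performs (the table layout and deepening policy here are invented for illustration; the patented hardware is considerably more involved):

```python
class StreamPrefetcher:
    """Toy stream prefetch table: track candidate streams by their
    next expected line address and deepen prefetching on each hit."""

    def __init__(self, line=64, max_depth=8):
        self.line = line
        self.max_depth = max_depth
        self.streams = {}   # next expected line address -> depth

    def load(self, addr):
        line_addr = addr - addr % self.line
        if line_addr in self.streams:              # hit: stream confirmed
            depth = min(self.streams.pop(line_addr) + 1, self.max_depth)
            self.streams[line_addr + self.line] = depth
            # Prefetch the next `depth` lines of this stream.
            return [line_addr + self.line * k for k in range(1, depth + 1)]
        # Miss: start tracking a new potential stream at depth 1.
        self.streams[line_addr + self.line] = 1
        return []

pf = StreamPrefetcher()
for a in range(0, 512, 64):          # a sequential access pattern
    print(hex(a), pf.load(a))
```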

  17. Continuous Distributed Top-k Monitoring over High-Speed Rail Data Stream in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2013-01-01

    Full Text Available In the environment of cloud computing, real-time mass data about high-speed rail, based on intense monitoring by large-scale sensing equipment, provides strong support for the safety and maintenance of high-speed rail. In this paper, we focus on continuous distributed Top-k monitoring over multisource distributed data streams for high-speed rail. Specifically, we formalize the Top-k monitoring model for high-speed rail and propose DTMR, a Top-k monitoring algorithm for random, continuous, or strictly monotone aggregation functions. Extensive experiments confirm the validity of DTMR.
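
    A minimal single-node building block for such monitoring, continuous top-k over a stream with a bounded min-heap, can be sketched as follows (DTMR's distributed thresholding between sites is not reproduced here):

```python
import heapq

def continuous_top_k(stream, k):
    """Yield the current top-k scores after each arrival, keeping a
    min-heap of size k so each update costs O(log k)."""
    heap = []
    for score in stream:
        if len(heap) < k:
            heapq.heappush(heap, score)
        elif score > heap[0]:
            heapq.heapreplace(heap, score)   # evict current minimum
        yield sorted(heap, reverse=True)

readings = [3, 9, 1, 7, 7, 12, 2, 8]        # e.g. sensor alarm scores
for top in continuous_top_k(readings, k=3):
    print(top)
```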

  18. Computational investigation of hydrokinetic turbine arrays in an open channel using an actuator disk-LES model

    Science.gov (United States)

    Kang, Seokkoo; Yang, Xiaolei; Sotiropoulos, Fotis

    2012-11-01

    While a considerable amount of work has focused on studying the effects and performance of wind farms, very little is known about the performance of hydrokinetic turbine arrays in open channels. Unlike large wind farms, where the vertical fluxes of momentum and energy from the atmospheric boundary layer comprise the main transport mechanisms, the presence of free surface in hydrokinetic turbine arrays inhibits vertical transport. To explore this fundamental difference between wind and hydrokinetic turbine arrays, we carry out LES with the actuator disk model to systematically investigate various layouts of hydrokinetic turbine arrays mounted on the bed of a straight open channel with fully-developed turbulent flow fed at the channel inlet. Mean flow quantities and turbulence statistics within and downstream of the arrays will be analyzed and the effect of the turbine arrays as means for increasing the effective roughness of the channel bed will be extensively discussed. This work was supported by Initiative for Renewable Energy & the Environment (IREE) (Grant No. RO-0004-12), and computational resources were provided by Minnesota Supercomputing Institute.

  19. Affective three-dimensional brain-computer interface created using a prism array-based display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs) because involuntarily motivated selective attention by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimension (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counter-balanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly reduced in the prism condition than in the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.

  20. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.

    Science.gov (United States)

    Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-10-21

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
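
    The multi-frame fusion step follows the general shift-and-add recipe; a bare-bones version (registration is assumed given, and the names are ours, not the authors' Android implementation) is:

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Fuse registered frames: undo each frame's estimated (dy, dx)
    offset and average, suppressing the fixed-pattern sampling
    artefacts of any single frame.

    frames : list of 2-D arrays of equal shape
    shifts : list of integer (dy, dx) offsets, e.g. from cross-
             correlation registration (estimation not shown here)
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for f, (dy, dx) in zip(frames, shifts):
        acc += np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
    return acc / len(frames)

rng = np.random.default_rng(1)
base = rng.random((64, 64))
frames = [np.roll(np.roll(base, d, axis=0), d, axis=1) for d in range(4)]
fused = shift_and_add(frames, [(d, d) for d in range(4)])
```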

  1. Scalar localization by cone-beam computed tomography of cochlear implant carriers: a comparative study between straight and perimodiolar precurved electrode arrays.

    Science.gov (United States)

    Boyer, Eric; Karkas, Alexandre; Attye, Arnaud; Lefournier, Virginie; Escude, Bernard; Schmerber, Sebastien

    2015-03-01

    To compare the incidence of dislocation of precurved versus straight flexible cochlear implant electrode arrays using cone-beam computed tomography (CBCT) image analyses. Consecutive nonrandomized case-comparison study. Tertiary referral center. Analyses of patients' CBCT images after cochlear implant surgery. Precurved and straight flexible electrode arrays from two different manufacturers were implanted. A round window insertion was performed in most cases. Two cases necessitated a cochleostomy. The patients' CBCT images were reconstructed in the coronal oblique, sagittal oblique, and axial oblique section. The insertion depth angle and the incidence of dislocation from the scala tympani to the scala vestibuli were determined. The CBCT images and the incidence of dislocation were analyzed in 54 patients (61 electrode arrays). Thirty-one patients were implanted with a precurved perimodiolar electrode array and 30 patients with a straight flexible electrode array. A total of nine (15%) scalar dislocations were observed in both groups. Eight (26%) scalar dislocations were observed in the precurved array group and one (3%) in the straight array group. Dislocation occurred at an insertion depth angle between 170 and 190 degrees in the precurved array group and at approximately 370 degrees in the straight array group. With precurved arrays, dislocation usually occurs in the ascending part of the basal turn of the cochlea. With straight flexible electrode arrays, the incidence of dislocation was lower, and it seems that straight flexible arrays have a higher chance of a confined position within the scala tympani than perimodiolar precurved arrays.

  2. Development and applications of a computer-aided phased array assembly for ultrasonic testing

    International Nuclear Information System (INIS)

    Schenk, G.; Montag, H.J.; Wuestenberg, H.; Erhard, A.

    1985-01-01

    The use of modern electronic equipment for programmable signal delay increasingly allows transit-time-controlled phased arrays to be applied in non-destructive ultrasonic materials testing. A phased-array assembly is described that permits fast variation of the acoustic wave's angle of incidence and of the beam focus, together with numerical evaluation of measured data. Phased arrays can be optimized by adding programmable electronic equipment so that the quality of conventional designs can be achieved. Applications of the new technique are explained with reference to stress corrosion cracking, turbine testing, and echo tomography of welded joints.

  3. WorkStream -- A Design Pattern for Multicore-Enabled Finite Element Computations

    KAUST Repository

    Turcksin, Bruno

    2016-08-31

    Many operations that need to be performed in modern finite element codes can be described as an operation that needs to be done independently on every cell, followed by a reduction of these local results into a global data structure. For example, matrix assembly, estimating discretization errors, or converting nodal values into data structures that can be output in visualization file formats all fall into this class of operations. Using this realization, we identify a software design pattern that we call WorkStream and that can be used to model such operations and enables the use of multicore shared-memory parallel processing. We also describe in detail how this design pattern can be efficiently implemented, and we provide numerical scalability results from its use in the deal.II software library.
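
    The pattern is easy to sketch outside C++; the following schematic Python analogue (our simplification, not the deal.II API) runs the independent per-cell work in parallel and serializes the reduction into the global object:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def workstream(cells, worker, copier, n_threads=4):
    """Schematic WorkStream: run `worker` on every cell in parallel,
    then apply `copier` under a lock so the global reduction stays
    race-free (the real pattern also preserves copier ordering)."""
    lock = Lock()
    def run(cell):
        local = worker(cell)          # independent per-cell work
        with lock:
            copier(local)             # serialized global update
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(run, cells))

# Example: "assemble" the sum of squared cell indices.
total = []
workstream(range(8), worker=lambda c: c * c,
           copier=lambda v: total.append(v))
print(sum(total))   # 140
```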

  4. Wastewater treatment using photo-impinging streams cyclone reactor: Computational fluid dynamics and kinetics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Royaee, Sayed Javid; Shafeghat, Amin [Research Institute of Petroleum Industry, Tehran (Iran, Islamic Republic of); Sohrabi, Morteza [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2014-02-15

    A photo impinging streams cyclone reactor has been used as a novel apparatus in photocatalytic degradation of organic compounds using titanium dioxide nanoparticles in wastewater. The operating parameters, including catalyst loading, pH, initial phenol concentration and light intensity have been optimized to increase the efficiency of the photocatalytic degradation process within this photoreactor. The results have demonstrated a higher efficiency and an increased performance capability of the present reactor in comparison with the conventional processes. In the next step, residence time distribution (RTD) of the slurry phase within the reactor was measured using the impulse tracer method. A CFD-based model for predicting the RTD was also developed which compared well with the experimental results. The RTD data was finally applied in conjunction with the phenol degradation kinetic model to predict the apparent rate coefficient for such a reaction.
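
    The RTD itself comes from impulse-tracer data by the standard normalization E(t) = C(t) / ∫ C(t) dt; a quick illustration with a synthetic tracer curve (the numerical scheme and values are ours, not the paper's):

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rtd_from_tracer(t, c):
    """Normalize an impulse-tracer concentration curve C(t) into a
    residence time distribution E(t) and return it together with the
    mean residence time (the first moment of E)."""
    e = c / _trapz(c, t)              # E(t) integrates to 1
    t_mean = _trapz(t * e, t)
    return e, t_mean

t = np.linspace(0.0, 60.0, 121)       # seconds
c = t * np.exp(-t / 8.0)              # synthetic tracer response
e, t_mean = rtd_from_tracer(t, c)
print(round(t_mean, 2))               # ~16 s for this synthetic curve
```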

  5. Verification of computed tomographic estimates of cochlear implant array position: a micro-CT and histologic analysis.

    Science.gov (United States)

    Teymouri, Jessica; Hullar, Timothy E; Holden, Timothy A; Chole, Richard A

    2011-08-01

    To determine the efficacy of clinical computed tomographic (CT) imaging to verify postoperative electrode array placement in cochlear implant (CI) patients. Nine fresh cadaver heads underwent clinical CT scanning, followed by bilateral CI insertion and postoperative clinical CT scanning. Temporal bones were removed, trimmed, and scanned using micro-CT. Specimens were then dehydrated, embedded in either methyl methacrylate or LR White resin, and sectioned with a diamond wafering saw. Histology sections were examined by 3 blinded observers to determine the position of individual electrodes relative to soft tissue structures within the cochlea. Electrodes were judged to be within the scala tympani, scala vestibuli, or in an intermediate position between scalae. The position of the array could be estimated accurately from clinical CT scans in all specimens using micro-CT and histology as a criterion standard. Verification using micro-CT yielded 97% agreement, and histologic analysis revealed 95% agreement with clinical CT results. A composite, 3-dimensional image derived from a patient's preoperative and postoperative CT images using a clinical scanner accurately estimates the position of the electrode array as determined by micro-CT imaging and histologic analyses. Information obtained using the CT method provides valuable insight into numerous variables of interest to patient performance such as surgical technique, array design, and processor programming and troubleshooting.

  6. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers the implementation of the Bellman-Ford and Lee algorithms for finding the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires a stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic in-situ planning of robot movement. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode, and parallel MISD modifications of these algorithms, were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and has a pronounced applied nature. The article also presents the results of an analysis of the Bellman-Ford and Lee algorithms in MISD mode. The paper outlines the main elements of a technique for parallelizing algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
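
    For reference, a conventional single-stream Bellman-Ford in Python; the MISD variants discussed above split this work between the CPU and the structures processor, but the underlying relaxation scheme is the same:

      def bellman_ford(n, edges, source):
          # edges: list of directed (u, v, weight) tuples over vertices 0..n-1
          INF = float("inf")
          dist = [INF] * n
          dist[source] = 0
          for _ in range(n - 1):                 # n-1 rounds of edge relaxation
              for u, v, w in edges:
                  if dist[u] + w < dist[v]:
                      dist[v] = dist[u] + w
          return dist

      print(bellman_ford(4, [(0, 1, 5), (1, 2, 2), (0, 2, 9), (2, 3, 1)], 0))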

  7. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  8. Optimisation of the conditions for stripping voltammetric analysis at liquid-liquid interfaces supported at micropore arrays: a computational simulation.

    Science.gov (United States)

    Strutwolf, Jörg; Arrigan, Damien W M

    2010-10-01

    Micropore membranes have been used to form arrays of microinterfaces between immiscible electrolyte solutions (µITIES) as a basis for the sensing of non-redox-active ions. Stripping voltammetry was recently implemented as a sensing method at these µITIES arrays to detect drugs and biomolecules at low concentrations. The present study uses computational simulation to investigate the optimum conditions for stripping voltammetric sensing at the µITIES array. In this scenario, the diffusion of ions in both the aqueous and the organic phases contributes to the sensing response. The influence of the preconcentration time, the micropore aspect ratio, the location of the microinterface within the pore, the ratio of the diffusion coefficients of the analyte ion in the organic and aqueous phases, and the pore wall angle were investigated. The simulations reveal that the accessibility of the microinterfaces during the preconcentration period should not be hampered by a recessed interface and that diffusional transport in the phase where the analyte ions are preconcentrated should be minimized. This ensures that the ions are accumulated within the micropores close to the interface and are thus readily available for back transfer during the stripping process. On the basis of the results, an optimal combination of the examined parameters is proposed, which together improve the stripping voltammetric signal and provide an improvement in the detection limit.

  9. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, caused by acoustic attenuation and by assuming a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array, based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
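
    The coherence factor mentioned above is conventionally defined as the ratio of the coherent (beamsummed) energy to the total channel energy; a minimal Python sketch with illustrative data (array sizes and sample values are assumptions of this example):

      import numpy as np

      def coherence_factor(channel_data):
          # channel_data: delayed (focused) samples across the array elements
          # for a single image point
          n = channel_data.size
          coherent = np.abs(np.sum(channel_data)) ** 2
          incoherent = n * np.sum(np.abs(channel_data) ** 2)
          return coherent / incoherent           # 1 for perfectly coherent data

      samples = np.exp(1j * np.random.uniform(0.0, 0.3, 64))  # illustrative
      pixel = coherence_factor(samples) * np.abs(np.sum(samples))
      print(pixel)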

  10. Cone-beam computed tomography in children with cochlear implants: The effect of electrode array position on ECAP.

    Science.gov (United States)

    Lathuillière, Marine; Merklen, Fanny; Piron, Jean-Pierre; Sicard, Marielle; Villemus, Françoise; Menjot de Champfleur, Nicolas; Venail, Frédéric; Uziel, Alain; Mondain, Michel

    2017-01-01

    To assess the feasibility of using cone-beam computed tomography (CBCT) in young children with cochlear implants (CIs) and to study the effect of intracochlear position on electrophysiological and behavioral measurements. A total of 40 children with either unilateral or bilateral cochlear implants were prospectively included in the study. Electrode placement and insertion angles were studied in 55 Cochlear® implants (16 straight arrays and 39 perimodiolar arrays), using either CBCT or X-ray imaging. CBCT or X-ray imaging was scheduled when the children were leaving the recovery room. We recorded intraoperative and postoperative neural response telemetry threshold (T-NRT) values, intraoperative and postoperative electrode impedance values, as well as behavioral T (threshold) and C (comfort) levels on electrodes 1, 5, 10, 15, and 20. CBCT imaging was feasible without any sedation in 24 children (60%). Accidental scala vestibuli insertion was observed in 3 of 24 implants as assessed by CBCT. The mean insertion angle was 339.7°±35.8°. The use of a perimodiolar array led to higher angles of insertion, lower postoperative T-NRT, as well as decreased behavioral T and C levels. We found no significant effect of either electrode array position or angle of insertion on the electrophysiological data. CBCT appears to be a reliable tool for anatomical assessment of young children with CIs. Intracochlear position had no significant effect on the electrically evoked compound action potential (ECAP) threshold. Our CBCT protocol must be improved to increase the rate of successful investigations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Tests Of Array Of Flush Pressure Sensors

    Science.gov (United States)

    Larson, Larry J.; Moes, Timothy R.; Siemers, Paul M., III

    1992-01-01

    Report describes tests of an array of pressure sensors connected to small orifices flush with the surface of a 1/7-scale model of the F-14 airplane in a wind tunnel. Part of an effort to determine whether pressure parameters consisting of various sums, differences, and ratios of the measured pressures can be used to compute accurate free-stream values of stagnation pressure, static pressure, angle of attack, angle of sideslip, and Mach number. Such arrays of sensors and associated processing circuitry could be integrated into advanced aircraft as parts of flight-monitoring and -controlling systems.
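
    As an example of such a pressure parameter, the subsonic isentropic relation recovers Mach number from the ratio of stagnation to static pressure (a standard air-data formula, not necessarily the one used in the report); a Python sketch:

      import math

      def mach_from_pressures(p_total, p_static, gamma=1.4):
          # subsonic isentropic flow: p0/p = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1))
          ratio = (p_total / p_static) ** ((gamma - 1.0) / gamma)
          return math.sqrt(2.0 / (gamma - 1.0) * (ratio - 1.0))

      # illustrative values: sea-level static pressure, modest stagnation rise
      print(mach_from_pressures(p_total=107800.0, p_static=101325.0))  # ~Mach 0.3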

  12. VFLOW2D - A Vortex-Based Code for Computing Flow Over Elastically Supported Tubes and Tube Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Walter P.; Strickland, James H.; Homicz, Gregory F.; Gossler, Albert A.

    2000-10-11

    A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
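
    The core of any such vortex method is the velocity induced at a point by the vortex elements; a direct-summation Python sketch of the two-dimensional Biot-Savart law (the code described above accelerates this step with a fast multipole technique; the vortex positions and strengths below are illustrative):

      import numpy as np

      def induced_velocity(x, y, vx, vy, gamma):
          # velocity at (x, y) induced by point vortices at (vx, vy) with
          # circulations gamma; (x, y) must not coincide with a vortex position
          dx, dy = x - vx, y - vy
          r2 = dx ** 2 + dy ** 2
          u = np.sum(-gamma * dy / (2.0 * np.pi * r2))
          v = np.sum(gamma * dx / (2.0 * np.pi * r2))
          return u, v

      vx = np.array([1.0, -1.0])
      vy = np.array([0.0, 0.0])
      gamma = np.array([1.0, -1.0])              # counter-rotating pair
      print(induced_velocity(0.0, 0.0, vx, vy, gamma))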

  13. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. The animal experiments (pig blood) demonstrate the feasibility of the approach.
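
    The paper's HAM formulation is not reproduced here, but a classical linear heteroassociative memory stores input-output pattern pairs as a single Hebbian weight matrix; a Python sketch with illustrative (orthonormal) patterns, which give exact recall:

      import numpy as np

      # illustrative sensor patterns (rows of X) and associated alarm codes
      # (rows of Y); these are stand-ins, not the paper's data
      X = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
      Y = np.array([[1.0, 0.0],
                    [0.0, 1.0]])

      W = Y.T @ X                  # Hebbian weight matrix, sum of outer products
      print(W @ X[0])              # recalls Y[0] for the first stored pattern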

  15. Computational diffraction tomographic microscopy with transport of intensity equation using a light-emitting diode array

    Science.gov (United States)

    Li, Jiaji; Chen, Qian; Zhang, Jialin; Zuo, Chao

    2017-10-01

    Optical diffraction tomography (ODT) is an effective label-free technique for quantitative refractive-index imaging, which enables long-term monitoring of the internal three-dimensional (3D) structures and molecular composition of biological cells with minimal perturbation. However, existing optical tomographic methods generally rely on an interferometric configuration for phase measurement and sophisticated mechanical systems for sample rotation or beam scanning. As a result, the measurement is susceptible to phase errors arising from coherent speckle, environmental vibrations, and mechanical error during the data acquisition process. To overcome these limitations, we present a new ODT technique based on non-interferometric phase retrieval and programmable illumination emitted from a light-emitting diode (LED) array. The experimental system is built on a traditional bright-field microscope, with the light source replaced by a programmable LED array, which provides angle-variable quasi-monochromatic illumination with an angular coverage of +/-37 degrees in both x and y directions (corresponding to an illumination numerical aperture of ~0.6). The transport of intensity equation (TIE) is utilized to recover the phase at each illumination angle, and the refractive-index distribution is reconstructed within the ODT framework under the first Rytov approximation. The missing-cone problem in ODT is addressed by using an iterative non-negativity constraint algorithm, and the misalignment of the LED array is further numerically corrected to improve the accuracy of refractive-index quantification. Experiments on polystyrene beads and thick biological specimens show that the proposed approach allows accurate refractive-index reconstruction while greatly reducing the system complexity and environmental sensitivity compared with conventional interferometric ODT approaches.
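
    The transport of intensity equation used in the phase-retrieval step is conventionally written as follows (standard form; the symbols are defined here rather than quoted from the paper):

      \nabla_{\perp} \cdot \big( I(\mathbf{r}) \, \nabla_{\perp} \phi(\mathbf{r}) \big) = -k \, \frac{\partial I(\mathbf{r})}{\partial z}

    where I is the measured intensity, \phi the phase to be recovered, k = 2\pi/\lambda the wavenumber, and z the coordinate along the optical axis.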

  16. Computer analysis to the geochemical interpretation of soil and stream sediment data in an area of Southern Uruguay

    International Nuclear Information System (INIS)

    Spangenberg, J.

    2010-01-01

    In southern Uruguay there are several known occurrences of base metal sulphide mineralization within an area of Precambrian volcanic sedimentary rocks. Regional geochemical stream sediment reconnaissance surveys revealed new polymetallic anomalies in the same stratigraphic zone. Geochemical interpretation of multi-element data from a soil and stream sediment survey carried out in one of these anomalous areas is presented.

  17. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    Science.gov (United States)

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles, based on the most recently published (as of 2010; Soong and others, 2004) regional flood-frequency equations, at any rural stream location in Illinois. Limited streamflow statistics, including general statistics, flow durations, and base flows, also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrography Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS were then compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood-quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood-quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21) and the mean

  18. Real-time field programmable gate array architecture for computer vision

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2001-01-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field-programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
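
    The operation this architecture implements is a generic sliding-window multiply-accumulate; a reference Python sketch of the computation (the FPGA pipelines this loop nest in hardware; a 'valid'-size output is assumed):

      import numpy as np

      def convolve2d(image, mask):
          # direct sliding-window multiply-accumulate over the image
          mh, mw = mask.shape
          ih, iw = image.shape
          out = np.zeros((ih - mh + 1, iw - mw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.sum(image[i:i + mh, j:j + mw] * mask)
          return out

      image = np.arange(25.0).reshape(5, 5)
      mask = np.ones((3, 3)) / 9.0               # 3x3 averaging mask
      print(convolve2d(image, mask))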

  19. A scalable quantum computer with ions in an array of microtraps

    Science.gov (United States)

    Cirac; Zoller

    2000-04-06

    Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).

  20. Computer analysis to the geochemical interpretation of soil and stream sediment data in an area of Southern Uruguay

    International Nuclear Information System (INIS)

    Spangenberg, J.

    2012-01-01

    This work presents the geochemical interpretation of multi-element data from a soil and stream sediment survey carried out in southern Uruguay. This zone has several known occurrences of base metal sulphide mineralization.

  1. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  3. Wavelet subband coding of computer simulation output using the A++ array class library

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D. [Los Alamos National Lab., NM (United States); Nuri, V. [Washington State Univ., Pullman, WA (United States). School of EECS

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  4. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
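
    A hedged Python sketch of the two regression steps described above (the sample values are illustrative, not survey data; the published procedure also works with log-transformed variables and applies uncertainty criteria omitted here):

      import numpy as np

      turb = np.array([12.0, 30.0, 55.0, 80.0, 140.0])   # turbidity
      flow = np.array([3.1, 6.5, 10.2, 14.8, 30.5])      # streamflow, m^3/s
      ssc = np.array([20.0, 48.0, 90.0, 130.0, 240.0])   # concentration, mg/L

      # simple linear regression: SSC = b0 + b1 * turbidity
      b1, b0 = np.polyfit(turb, ssc, 1)

      # multiple linear regression: SSC = c0 + c1*turbidity + c2*streamflow
      X = np.column_stack([np.ones_like(turb), turb, flow])
      coeffs, *_ = np.linalg.lstsq(X, ssc, rcond=None)
      print(b0, b1, coeffs)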

  5. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  6. Ultra-low power high precision magnetotelluric receiver array based customized computer and wireless sensor network

    Science.gov (United States)

    Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.

    2016-12-01

    Dense 3D magnetotelluric (MT) data acquisition offers the benefit of suppressing the static shift and topography effects and can achieve high-precision, high-resolution inversion of underground structure. This method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. It is necessary to greatly reduce the power consumption of an MT signal receiver for large-scale 3D MT data acquisition, while using a sensor network to monitor the data quality of deployed MT receivers. We adopted a series of technologies to realize the above goals. First, we designed a low-power embedded computer which couples tightly with the other parts of the MT receiver and supports a wireless sensor network. The power consumption of our embedded computer is less than 1 watt. Then we designed a 4-channel data acquisition subsystem which supports 24-bit analog-to-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a suite of software was developed to support data acquisition, calibration, the wireless sensor network, and testing. The software, which runs on a personal computer, can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts in full operation. The standby power consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data in the field with an electric dipole length of 3 m. Over 100 MT receivers were built and used for large-scale geothermal exploration in China with great success.

  7. Streams with Strahler Stream Order

    Data.gov (United States)

    Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...

  8. Experiences with the ACPMAPS (Advanced Computer Program Multiple Array Processor System) 50 GFLOP system

    International Nuclear Information System (INIS)

    Fischler, M.

    1992-10-01

    The Fermilab Computer R&D and Theory departments have for several years collaborated on a multi-GFLOP (recently upgraded to 50 GFLOP) system for lattice gauge calculations. The primary emphasis is on flexibility and ease of algorithm development. This system (ACPMAPS) has been in use for some time, allowing theorists to produce QCD results with relevance for the analysis of experimental data. We present general observations about the benefits of such a scientist-oriented system and summarize some of the advances recently made. We also discuss what was discovered about the features needed in a useful algorithm-exploration platform. These lessons can be applied to the design and evaluation of future massively parallel systems (commercial or otherwise).

  9. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available Real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas, as is apparent from the current literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest an automation of CDISI for metal armor plates using a soft computing approach, developing a fuzzy inference system to deal effectively with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate a technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB's fuzzy logic toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). Xilinx ISE WebPACK 9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx's Spartan 3 FPGA. SynaptiCAD's Verilog simulators, VeriLogger PRO and ModelSim, are used as the software simulation and debug environment.

  10. A computational study for investigating acoustic streaming and tissue heating during high intensity focused ultrasound through blood vessel with an obstacle

    Science.gov (United States)

    Parvin, Salma; Sultana, Aysha

    2017-06-01

    The influence of high-intensity focused ultrasound (HIFU) on an obstacle in a blood vessel is studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field around the obstacle. The model is based on the linear Westervelt equation and the conjugate heat transfer equations for the obstacle and the blood vessel. The system of equations is solved using the Finite Element Method (FEM). This three-dimensional numerical study shows that the rate of heat transfer from the obstacle increases and that both convective cooling and acoustic streaming can considerably change the temperature field.

  11. Stream Crossings

    Data.gov (United States)

    Vermont Center for Geographic Information — Physical measurements and attributes of stream crossing structures and adjacent stream reaches which are used to provide a relative rating of aquatic organism...

  12. Data streams: algorithms and applications

    National Research Council Canada - National Science Library

    Muthukrishnan, S

    2005-01-01

    ... massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [175]. S. Muthukrishnan Rutgers University, New Brunswick, NJ, USA, muthu@cs...

  13. Akamai Streaming

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    Akamai offers world-class streaming media services that enable Internet content providers and enterprises to succeed in today's Web-centric marketplace. They deliver live event Webcasts (complete with video production, encoding, and signal acquisition services), streaming media on demand, 24/7 Webcasts and a variety of streaming application services based upon their EdgeAdvantage.

  14. Registration of human skull computed tomography data to an ultrasound treatment space using a sparse high frequency ultrasound hemispherical array

    Energy Technology Data Exchange (ETDEWEB)

    O’Reilly, Meaghan A., E-mail: moreilly@sri.utoronto.ca; Jones, Ryan M. [Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7 (Canada); Birman, Gabriel [Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Hynynen, Kullervo [Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7 (Canada); Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9 (Canada)

    2016-09-15

    Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and on the registration of preoperative computed tomography (CT) data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring, and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. Results: The results show, on average, sub-millimeter (0.9 ± 0.2 mm) displacement and sub-degree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and a reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available.
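
    Rigid registration of this kind can be illustrated with the standard SVD-based (Kabsch) least-squares method; a Python sketch assuming known point correspondences (the paper's actual surface-matching procedure is not reproduced here):

      import numpy as np

      def rigid_register(P, Q):
          # least-squares rotation R and translation t mapping point set P
          # onto Q; both arrays have shape (n_points, 3)
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cq - R @ cp
          return R, t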

  16. Analysis of hydraulic characteristics for stream diversion in small stream

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang-Jin; Jun, Kye-Won [Chungbuk National University, Cheongju(Korea)

    2001-10-31

    This study analyzes the hydraulic characteristics of a stream diversion reach by numerical model tests, providing basic data for flood analysis and for understanding stream flow characteristics. The hydraulic characteristics of the Seoknam stream were analyzed using the computer models HEC-RAS (a one-dimensional model) and RMA2 (a two-dimensional finite element model). The results showed that RMA2, which simulates the left overbank, main channel, and right overbank separately, is more effective than HEC-RAS in analyzing flow in channel bends, steep slopes, and complex bed forms that affect stream flow characteristics. (author). 13 refs., 3 tabs., 5 figs.

  17. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to
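
    As an analogue of the collective global-sum operation described above, sketched in Python with mpi4py rather than Co-Array Fortran (an assumption of this example; run with, e.g., mpiexec -n 4 python script.py):

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # each rank (an "image", in co-array terms) owns one block of a
      # distributed vector
      local_block = [float(rank * 4 + i) for i in range(4)]
      local_sum = sum(local_block)

      # collective reduction across all ranks
      global_sum = comm.allreduce(local_sum, op=MPI.SUM)
      if rank == 0:
          print("global sum:", global_sum)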

  18. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    Science.gov (United States)

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonically induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models which help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and the induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W0/V ≥ 25 kW/m3. This model successfully describes the hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
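
    Writing out the stated momentum-matching assumption explicitly (a sketch; the symbols are defined here and are not quoted from the paper): the momentum rate carried by an acoustic beam of power P is P/c, so equating it with the momentum flux of the jet through the horn-tip area A fixes the inlet velocity,

      \dot{M}_{ac} = \frac{P}{c}, \qquad \rho\, u^{2} A = \frac{P}{c} \;\Rightarrow\; u = \sqrt{\frac{P}{\rho\, c\, A}},

    where \rho is the liquid density and c the speed of sound in the liquid.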

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  20. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system having a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support of the operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, i.e., a structure processor (SP), which improved the performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in the MISD (Multiple Instructions, Single Data) system and showed the structure and features of the structure processor and the general principles of solving discrete optimization problems on graphs. This paper examines two algorithms for finding a minimum spanning tree, namely Kruskal's and Prim's algorithms. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents the results of an experimental comparison of the MISD system's performance in coprocessor mode with that of mainframes.
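
    For reference, Kruskal's algorithm in conventional single-stream Python; in the MISD implementations above, the set and data-structure operations (here, the union-find) are the kind of work the structures processor is designed to accelerate:

      def kruskal(n, edges):
          # edges: iterable of (weight, u, v) tuples over vertices 0..n-1
          parent = list(range(n))

          def find(x):
              # union-find with path halving
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x

          tree = []
          for w, u, v in sorted(edges):        # edges by ascending weight
              ru, rv = find(u), find(v)
              if ru != rv:                     # edge joins two components
                  parent[ru] = rv
                  tree.append((u, v, w))
          return tree

      print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))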

  1. Insertion characteristics and placement of the Mid-Scala electrode array in human temporal bones using detailed cone beam computed tomography.

    Science.gov (United States)

    Dietz, Aarno; Gazibegovic, Dzemal; Tervaniemi, Jyrki; Vartiainen, Veli-Matti; Löppönen, Heikki

    2016-12-01

    The aim of this study was to evaluate the insertion results and placement of the new Advanced Bionics HiFocus Mid-Scala (HFms) electrode array, inserted through the round window membrane, in eight fresh human temporal bones using cone-beam computed tomography (CBCT). Pre- and post-insertion CBCT scans were registered to create a 3D reconstruction of the cochlea with the array inserted. With an image fusion technique, both the bony edges of the cochlea and the electrode array in situ could be accurately determined, thus enabling identification of the exact position of the electrode array within the scala tympani. Vertical and horizontal scalar location was measured at four points along the cochlear base at angular insertion depths of 90°, 180° and 270° and at electrode 16, the most basal electrode. Smooth insertion through the round window membrane was possible in all temporal bones. The imaging results showed that there were no dislocations from the scala tympani into the scala vestibuli. The HFms electrode was positioned in the middle of the scala along the whole electrode array in three of the eight bones and in 62% of the individual locations measured along the base of the cochlea. In only one cochlea was the electrode observed in close proximity to the basilar membrane, indicating possible contact with it. The results and assessments presented in this study appear to be highly accurate. Although further validation including histopathology is needed, the image fusion technique described in this study currently represents the most accurate method for intracochlear electrode assessment obtainable with CBCT.

  2. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    Science.gov (United States)

    2018-01-01

    Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 104 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364

  4. Stream Lifetimes Against Planetary Encounters

    Science.gov (United States)

    Valsecchi, G. B.; Lega, E.; Froeschle, Cl.

    2011-01-01

    We study, both analytically and numerically, the perturbation induced by an encounter with a planet on a meteoroid stream. Our analytical tool is the extension of Öpik's theory of close encounters, that we apply to streams described by geocentric variables. The resulting formulae are used to compute the rate at which a stream is dispersed by planetary encounters into the sporadic background. We have verified the accuracy of the analytical model using a numerical test.

  5. Stream systems.

    Science.gov (United States)

    Jack E. Williams; Gordon H. Reeves

    2006-01-01

    Restored, high-quality streams provide innumerable benefits to society. In the Pacific Northwest, high-quality stream habitat often is associated with an abundance of salmonid fishes such as chinook salmon (Oncorhynchus tshawytscha), coho salmon (O. kisutch), and steelhead (O. mykiss). Many other native...

  6. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  7. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing, the network enables distributed applicati

  8. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentration and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
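
    In outline, the method fits a regression of concurrently sampled suspended-sediment concentration (SSC) on turbidity (often in log space, with streamflow as an optional second predictor) and then applies the rating to the continuous turbidity record. A hedged sketch of the simplest log-log variant with Duan's retransformation bias correction follows; the function names and the tons/day unit constant are illustrative, not the report's prescription.

        import numpy as np

        def fit_rating(turbidity, ssc):
            # log-log rating: log10(SSC) = b0 + b1 * log10(turbidity)
            x, y = np.log10(turbidity), np.log10(ssc)
            b1, b0 = np.polyfit(x, y, 1)
            resid = y - (b0 + b1 * x)
            bcf = np.mean(10.0 ** resid)   # Duan's retransformation bias correction
            return b0, b1, bcf

        def ssc_series(turbidity, b0, b1, bcf):
            # apply the rating to the continuous turbidity time series
            return bcf * 10.0 ** (b0 + b1 * np.log10(turbidity))

        def load_series(ssc_mg_per_l, q_cfs):
            # suspended-sediment load in tons/day for SSC in mg/L and Q in ft^3/s
            return 0.0027 * ssc_mg_per_l * q_cfs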

  9. Sonography of the chest using linear-array versus sector transducers: Correlation with auscultation, chest radiography, and computed tomography.

    Science.gov (United States)

    Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli

    2016-07-08

    The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe, in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.

  10. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources and neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that needs to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive through the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as the topic areas covered in this report's sections.

  11. Testing of focal plane arrays

    International Nuclear Information System (INIS)

    Merriam, J.D.

    1988-01-01

    Problems associated with the testing of focal plane arrays are briefly examined with reference to the instrumentation and measurement procedures. In particular, the approach and instrumentation used at the Naval Ocean Systems Center is presented. Most of the measurements are made with flooded illumination on the focal plane array. The array is treated as an ensemble of individual pixels, data being taken on each pixel and array averages and standard deviations computed for the entire array. Data maps are generated, showing the pixel data in the proper spatial position on the array and the array statistics
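
    The flooded-illumination workflow described above reduces to per-pixel statistics over a stack of frames. A minimal sketch follows; the 3-sigma outlier rule and all names are illustrative assumptions, not the Center's actual procedure.

        import numpy as np

        def array_statistics(frames):
            # frames: (n_frames, rows, cols) stack of flooded-illumination exposures
            response_map = frames.mean(axis=0)   # per-pixel mean response ("data map")
            noise_map = frames.std(axis=0)       # per-pixel temporal noise
            mean_response = response_map.mean()  # whole-array (ensemble) average
            sigma_response = response_map.std()  # ensemble standard deviation
            # flag outlier (dead/hot) pixels, here with a simple 3-sigma rule
            bad_pixels = np.abs(response_map - mean_response) > 3 * sigma_response
            return response_map, noise_map, mean_response, sigma_response, bad_pixels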

  12. The use of wavenumber normalization in computing spatially averaged coherencies (KRSPAC) of microtremor data from asymmetric arrays

    Science.gov (United States)

    Asten, M.W.; Stephenson, William J.; Hartzell, Stephen

    2015-01-01

    The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow for station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum is excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra to Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalised SPAC method (KRSPAC) where, instead of performing averaging of sets of coherency versus frequency spectra and then fitting to a model SPAC spectrum, we interpolate each spectrum to coherency versus k.r, where k and r are wavenumber and station separation respectively, and r may be different for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J₀(kr). The normalization process changes with each iteration since k is a function of frequency and phase velocity and hence is updated each iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (Site STGA), where an asymmetric array having station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus, despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
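
    The essence of KRSPAC is that every pair's coherency spectrum is resampled onto a common, unitless k.r axis, where (for fundamental-mode Rayleigh waves) it can be compared with J₀(kr) regardless of station separation. A rough Python sketch of one misfit evaluation under a trial phase-velocity model; the names and grid choices are assumptions:

        import numpy as np
        from scipy.special import j0

        def krspac_misfit(pairs, freq, c_trial, kr_max=30.0):
            # pairs: list of (coherency_spectrum, separation_r), spectra on `freq`
            # c_trial: trial phase-velocity model c(f) on the same frequency grid
            k = 2.0 * np.pi * freq / c_trial        # wavenumber, updated per iteration
            kr_grid = np.linspace(0.5, kr_max, 300) # common unitless k.r axis
            misfit = 0.0
            for coh, r in pairs:
                # resample this pair onto k.r (assumes k*r increases with frequency)
                coh_kr = np.interp(kr_grid, k * r, coh)
                misfit += np.mean((coh_kr - j0(kr_grid)) ** 2)
            return misfit   # minimised over c(f) by the inversion loop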

  13. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform to deal with data streams.
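
    The bounded-memory, single-pass style of computation that such platforms (and RStorm prototypes) target can be illustrated, here in Python rather than R, by Welford's online mean/variance update, which never stores the stream itself:

        class StreamingMeanVar:
            # Welford's single-pass algorithm: O(1) memory per tracked statistic.
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0

            def update(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            @property
            def variance(self):
                return self.m2 / (self.n - 1) if self.n > 1 else float("nan")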

  14. Stream Evaluation

    Data.gov (United States)

    Kansas Data Access and Support Center — Digital representation of the map accompanying the "Kansas stream and river fishery resource evaluation" (R.E. Moss and K. Brunson, 1981. U.S. Fish and Wildlife...

  15. Computationally assisted screening and design of cell-interactive peptides by a cell-based assay using peptide arrays and a fuzzy neural network algorithm.

    Science.gov (United States)

    Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki

    2008-03-01

    We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced from even limited data by use of a model derived from computational analysis that serves as a guide to screening, combined with subsequent repeated experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay of a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold that of the basal array support was significantly higher (31%) than in the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.

  16. Stream Deniable-Encryption Algorithms

    Directory of Open Access Journals (Sweden)

    N.A. Moldovyan

    2016-04-01

    Full Text Available A method for stream deniable encryption of a secret message is proposed, which is computationally indistinguishable from the probabilistic encryption of some fake message. The method generates two key streams with some secure block cipher: one of the key streams is generated depending on the secret key, and the other is generated depending on the fake key. The key streams are mixed with the secret and fake data streams so that the output ciphertext looks like the ciphertext produced by some probabilistic encryption algorithm applied to the fake message while using the fake key. When the receiver and/or sender of the ciphertext are coerced to open the encryption key and the source message, they open the fake key and the fake message. To disclose their lie, the coercer would have to demonstrate the possibility of an alternative decryption of the ciphertext; however, this is a computationally hard problem.
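
    A toy illustration of the deniability idea (this is not the paper's construction, which derives both key streams from a secure block cipher and mixes them with the data streams): each ciphertext element is built, via the Chinese remainder theorem, so that it decrypts to the secret byte under one key stream and to the fake byte under the other.

        import secrets

        P, Q = 257, 263   # toy moduli; every byte value fits below both

        def encrypt(secret, fake, ks_secret, ks_fake):
            out = []
            for s, f, k1, k2 in zip(secret, fake, ks_secret, ks_fake):
                a, b = (s + k1) % P, (f + k2) % Q
                # CRT: the unique c < P*Q with c = a (mod P) and c = b (mod Q)
                c = (a * Q * pow(Q, -1, P) + b * P * pow(P, -1, Q)) % (P * Q)
                out.append(c)
            return out

        def decrypt(cipher, key_stream, modulus):
            return bytes((c % modulus - k) % modulus
                         for c, k in zip(cipher, key_stream))

        secret, fake = b"meet at dawn", b"hello, world"   # equal lengths
        ks = [secrets.randbelow(P) for _ in secret]       # stand-ins for the two
        kf = [secrets.randbelow(Q) for _ in fake]         # cipher-generated key streams
        c = encrypt(secret, fake, ks, kf)
        assert decrypt(c, ks, P) == secret and decrypt(c, kf, Q) == fake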

  17. Reconfigurable Multicore Architectures for Streaming Applications

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Kokkeler, Andre B.J.; Rauwerda, G.K.; Jacobs, J.W.M.; Nicolescu, G.; Mosterman, P.J.

    2009-01-01

    This chapter addresses reconfigurable heterogeneous and homogeneous multicore system-on-chip (SoC) platforms for streaming digital signal processing applications, also called DSP applications. In streaming DSP applications, computations can be specified as a data flow graph with streams of data items

  18. SCORE-EVET: a computer code for the multidimensional transient thermal-hydraulic analysis of nuclear fuel rod arrays

    International Nuclear Information System (INIS)

    Benedetti, R.L.; Lords, L.V.; Kiser, D.M.

    1978-02-01

    The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady state solution scheme to initialize the flow field, (b) steady state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage

  19. Design of a linear detector array unit for high energy x-ray helical computed tomography and linear scanner

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Tae; Park, Jong Hwan; Kim, Gi Yoon [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Dong Geun [Medical Imaging Department, ASTEL Inc., Seongnam (Korea, Republic of); Park, Shin Woong; Yi, Yun [Dept. of Electronics and Information Eng, Korea University, Seoul (Korea, Republic of); Kim, Hyun Duk [Research Center, Luvantix ADM Co., Ltd., Daejeon (Korea, Republic of)

    2016-11-15

    A linear detector array unit (LDAU) was proposed and designed for high energy X-ray 2-D and 3-D imaging systems for industrial non-destructive testing. Specially for 3-D imaging, a helical CT with a 15 MeV linear accelerator and a curved detector is proposed. The arc-shaped detector can be formed by many LDAUs, all of which are arranged to face the focal spot when the source-to-detector distance is fixed depending on the application. An LDAU is composed of 10 modules, and each module has 48 channels of CdWO{sub 4} (CWO) blocks and Si PIN photodiodes with 0.4 mm pitch. This modular design was made for easy manufacturing and maintenance. Through Monte Carlo simulation, the CWO detector thickness of 17 mm was optimally determined. The silicon PIN photodiodes were designed as 48-channel arrays and fabricated with NTD (neutron transmutation doping) wafers of high resistivity, and showed excellent leakage current properties below 1 nA at 10 V reverse bias. To minimize low-voltage breakdown, the edges of the active layer and the guard ring were designed with a curved shape. The data acquisition system was also designed and fabricated as three independent functional boards: a sensor board, a capture board and a communication board to a PC. This paper describes the design of the detectors (CWO blocks and Si PIN photodiodes) and the three-board data acquisition system with their simulation results.

  1. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of some recent research on programmable cellular arrays in computing and digital processing of information systems is presented, and includes both combinational and sequential arrays, with fully arbitrary behaviour, or which can realize better implementations of specialized blocks such as: arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correcting in cellular arrays. (author)

  2. Dynamical modeling of tidal streams

    International Nuclear Information System (INIS)

    Bovy, Jo

    2014-01-01

    I present a new framework for modeling the dynamics of tidal streams. The framework consists of simple models for the initial action-angle distribution of tidal debris, which can be straightforwardly evolved forward in time. Taking advantage of the essentially one-dimensional nature of tidal streams, the transformation to position-velocity coordinates can be linearized and interpolated near a small number of points along the stream, thus allowing for efficient computations of a stream's properties in observable quantities. I illustrate how to calculate the stream's average location (its 'track') in different coordinate systems, how to quickly estimate the dispersion around its track, and how to draw mock stream data. As a generative model, this framework allows one to compute the full probability distribution function and marginalize over or condition it on certain phase-space dimensions as well as convolve it with observational uncertainties. This will be instrumental in proper data analysis of stream data. In addition to providing a computationally efficient practical tool for modeling the dynamics of tidal streams, the action-angle nature of the framework helps elucidate how the observed width of the stream relates to the velocity dispersion or mass of the progenitor, and how the progenitors of 'orphan' streams could be located. The practical usefulness of the proposed framework crucially depends on the ability to calculate action-angle variables for any orbit in any gravitational potential. A novel method for calculating actions, frequencies, and angles in any static potential using a single orbit integration is described in the Appendix.

  3. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10¹¹-10¹² per square cm), low power consumption (no transfer of current) and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA

  4. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    Directory of Open Access Journals (Sweden)

    J. Bhardwaj

    2018-02-01

    Full Text Available New concepts and techniques are replacing traditional methods of water quality parameter measurement systems. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy sciences for decision making. The introduction of multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making for water engineers through a soft computing framework. The target of this proposed research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.

  5. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    Science.gov (United States)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement systems. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy sciences for decision making. The introduction of multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making for water engineers through a soft computing framework. The target of this proposed research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.

  6. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Directory of Open Access Journals (Sweden)

    Ion Stiharu

    2010-08-01

    Full Text Available Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node.

  7. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Science.gov (United States)

    Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602

  8. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    International Nuclear Information System (INIS)

    Markelov, G; Endemann, M; Wernham, D

    2008-01-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to contamination of the ALADIN optics. The paper presents a numerical approach based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species (in the case under consideration, between the free-stream flow and the outgassing from solar array materials). The collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with a simple analytical model prediction shows that the analytical model underpredicts mass fluxes

  9. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    Energy Technology Data Exchange (ETDEWEB)

    Markelov, G [Advanced Operations and Engineering Services (AOES) Group BV, Postbus 342, 2300 AH Leiden (Netherlands); Endemann, M [ESA-ESTEC/EOP-PAS, Postbus 299, 2200 AG Noordwijk (Netherlands); Wernham, D [ESA-ESTEC/EOP-PAQ, Postbus 299, 2200 AG Noordwijk (Netherlands)], E-mail: Gennady.Markelov@aoes.com

    2008-03-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to contamination of the ALADIN optics. The paper presents a numerical approach based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species (in the case under consideration, between the free-stream flow and the outgassing from solar array materials). The collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with a simple analytical model prediction shows that the analytical model underpredicts mass fluxes.

  10. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
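
    A compact sketch of the summation described above: the reference field is translated per hole, phase-rotated per hole and direction/wavelength, summed coherently, squared, and finally summed incoherently over directions and wavelengths. The array layout and the phase convention here are assumptions, not the authors' exact formulation.

        import numpy as np

        def diffraction_pattern(ref_fields, holes, k_vectors):
            # ref_fields[d]: complex reference field (2N x 2N) from the single
            #   central hole, one per LOS direction/wavelength d (precomputed)
            # holes: (dy, dx) pixel offsets of the aperture holes
            # k_vectors[d]: (ky, kx) transverse wavevector in rad/pixel for d
            power = np.zeros(ref_fields[0].shape, dtype=float)
            for field, (ky, kx) in zip(ref_fields, k_vectors):
                total = np.zeros_like(field)
                for dy, dx in holes:
                    shifted = np.roll(np.roll(field, dy, axis=0), dx, axis=1)
                    total += shifted * np.exp(1j * (ky * dy + kx * dx))
                power += np.abs(total) ** 2   # incoherent over directions/wavelengths
            return power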

  11. 3D non-destructive fluorescent X-ray computed tomography (FXCT) with a CdTe array

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Chang Yeon; Lee, Won Ho; Kim, Young Hak [Dept. of Bio-convergence Engineering, Korea University Graduate School, Seoul (Korea, Republic of)

    2015-10-15

    In our research, the material was exposed to an X-ray and not only the conventional transmission image but also 3D images based on the information of characteristic X-ray detected by a 2D CdTe planar detector array were reconstructed. Since atoms have their own characteristic X-ray energy, our system was able to discriminate materials of even a same density if the materials were composed of different atomic numbers. We applied FXCT to distinguish various unknown materials with similar densities. The materials with similar densities were clearly distinguished in the 3D reconstructed images based on the information of the detected characteristic X-ray, while they were not discriminated from each other in the images based on the information of the detected transmission X-ray. In the fused images consisting of 3D transmitted and characteristic X-ray images, all of the positions, densities and atomic numbers of materials enclosed in plastic phantom or pipe were clearly identified by analyzing energy, position and amount of detected radiation.

  12. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams.The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,

  13. ISS Solar Array Management

    Science.gov (United States)

    Williams, James P.; Martin, Keith D.; Thomas, Justin R.; Caro, Samuel

    2010-01-01

    The International Space Station (ISS) Solar Array Management (SAM) software toolset provides the capabilities necessary to operate a spacecraft with complex solar array constraints. It monitors spacecraft telemetry and provides interpretations of solar array constraint data in an intuitive manner. The toolset provides extensive situational awareness to ensure mission success by analyzing power generation needs, array motion constraints, and structural loading situations. The software suite consists of several components including samCS (constraint set selector), samShadyTimers (array shadowing timers), samWin (visualization GUI), samLock (array motion constraint computation), and samJet (attitude control system configuration selector). It provides high availability and uptime for extended and continuous mission support. It is able to support two-degrees-of-freedom (DOF) array positioning and supports up to ten simultaneous constraints with intuitive 1D and 2D decision support visualizations of constraint data. Display synchronization is enabled across a networked control center and multiple methods for constraint data interpolation are supported. Use of this software toolset increases flight safety, reduces mission support effort, optimizes solar array operation for achieving mission goals, and has run for weeks at a time without issues. The SAM toolset is currently used in ISS real-time mission operations.

  14. A High-Throughput Computational Framework for Identifying Significant Copy Number Aberrations from Array Comparative Genomic Hybridisation Data

    Directory of Open Access Journals (Sweden)

    Ian Roberts

    2012-01-01

    Full Text Available Reliable identification of copy number aberrations (CNA) from comparative genomic hybridization data would be improved by the availability of a generalised method for processing large datasets. To this end, we developed swatCGH, a data analysis framework and region detection heuristic for computational grids. swatCGH analyses sequentially displaced (sliding) windows of neighbouring probes and applies adaptive thresholds of varying stringency to identify the 10% of each chromosome that contains the most frequently occurring CNAs. We used the method to analyse a published dataset, comparing data preprocessed using four different DNA segmentation algorithms, and two methods for prioritising the detected CNAs. The consolidated list of the most commonly detected aberrations confirmed the value of swatCGH as a simplified high-throughput method for identifying biologically significant CNA regions of interest.
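
    A stripped-down sliding-window version of the idea is sketched below (fixed window and a single quantile threshold; swatCGH itself applies adaptive thresholds of varying stringency, per chromosome, on a computational grid):

        import numpy as np

        def window_calls(log2_ratios, window=11, quantile=0.90):
            # mean log2 ratio over sequentially displaced windows of probes,
            # then keep the windows above a per-chromosome quantile threshold
            x = np.asarray(log2_ratios, dtype=float)
            n = len(x) - window + 1
            means = np.array([x[i:i + window].mean() for i in range(n)])
            cut = np.quantile(np.abs(means), quantile)
            return np.where(np.abs(means) >= cut)[0], means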

  15. A real time sorting algorithm to time sort any deterministic time disordered data stream

    Science.gov (United States)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

    In new generation high intensity, high energy physics experiments, millions of free streaming high rate data sources are to be read out. Free streaming data with an associated time-stamp can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collect a large amount of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free streaming data can only be done with time based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data require significant computational effort to sort in real time before analysis. The present work reports a new high speed scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time based simulated data, likely to be collected in a high energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
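
    With a deterministic bound on how late a record can arrive, time sorting reduces to a min-heap reorder buffer. A sequential software sketch of that idea follows (the paper's contribution is a parallel FPGA architecture with memory management and zero suppression, not this code):

        import heapq
        from itertools import count

        def time_sort(records, max_skew):
            # records: iterable of (timestamp, payload); a record never arrives
            # more than `max_skew` behind the newest timestamp seen so far
            heap, newest, seq = [], None, count()
            for ts, payload in records:
                heapq.heappush(heap, (ts, next(seq), payload))
                newest = ts if newest is None else max(newest, ts)
                while heap and heap[0][0] <= newest - max_skew:
                    ts0, _, p0 = heapq.heappop(heap)  # safe to emit: nothing
                    yield ts0, p0                     # earlier can still arrive
            while heap:
                ts0, _, p0 = heapq.heappop(heap)
                yield ts0, p0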

  16. Symbol-stream Combiner: Description and Demonstration Plans

    Science.gov (United States)

    Hurd, W. J.; Reder, L. J.; Russell, M. D.

    1984-01-01

    A system is described and demonstration plans presented for antenna arraying by symbol stream combining. This system is used to enhance the signal-to-noise ratio of spacecraft signals by combining the detected symbol streams from two or more receiving stations. Symbol stream combining has both cost and performance advantages over other arraying methods. Demonstrations are planned on Voyager 2 both prior to and during the Uranus encounter. Operational use is possible for interagency arraying of non-Deep Space Network stations at the Neptune encounter.

  17. Cellular Subcompartments through Cytoplasmic Streaming.

    Science.gov (United States)

    Pieuchot, Laurent; Lai, Julian; Loh, Rachel Ann; Leong, Fong Yew; Chiam, Keng-Hwee; Stajich, Jason; Jedd, Gregory

    2015-08-24

    Cytoplasmic streaming occurs in diverse cell types, where it generally serves a transport function. Here, we examine streaming in multicellular fungal hyphae and identify an additional function wherein regimented streaming forms distinct cytoplasmic subcompartments. In the hypha, cytoplasm flows directionally from cell to cell through septal pores. Using live-cell imaging and computer simulations, we identify a flow pattern that produces vortices (eddies) on the upstream side of the septum. Nuclei can be immobilized in these microfluidic eddies, where they form multinucleate aggregates and accumulate foci of the HDA-2 histone deacetylase-associated factor, SPA-19. Pores experiencing flow degenerate in the absence of SPA-19, suggesting that eddy-trapped nuclei function to reinforce the septum. Together, our data show that eddies comprise a subcellular niche favoring nuclear differentiation and that subcompartments can be self-organized as a consequence of regimented cytoplasmic streaming. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have some intelligible and reliable calculation for the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored

  19. Galaxies with jet streams

    International Nuclear Information System (INIS)

    Breuer, R.

    1981-01-01

    Describes recent research work on supersonic gas flow. Notable examples have been observed in cosmic radio sources, where jet streams of galactic dimensions sometimes occur, apparently as the result of interaction between neighbouring galaxies. The current theory of jet behaviour has been convincingly demonstrated using computer simulation. The surprisingly long-term stability is related to the supersonic velocity, and is analogous to the way in which an Apollo spacecraft re-entering the atmosphere supersonically is protected by the gas from the burning shield. (G.F.F.)

  20. The LHCb Turbo stream

    Energy Technology Data Exchange (ETDEWEB)

    Puig, A., E-mail: albert.puig@cern.ch

    2016-07-11

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 with a selection of physics analyses. It is anticipated that the turbo stream will be adopted by an increasing number of analyses during the remainder of LHC Run II (2015–2018) and ultimately in Run III (starting in 2020) with the upgraded LHCb detector.

  1. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The papers range from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of the SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also to give guidelines for future applications of SNP arrays.

  2. Influence of ultrasound power on acoustic streaming and micro-bubbles formations in a low frequency sono-reactor: mathematical and 3D computational simulation.

    Science.gov (United States)

    Sajjadi, Baharak; Raman, Abdul Aziz Abdul; Ibrahim, Shaliza

    2015-05-01

    This paper aims at investigating the influence of ultrasound power amplitude on liquid behaviour in a low-frequency (24 kHz) sono-reactor. Three types of analysis were employed: (i) mechanical analysis of micro-bubble formation and their activities/characteristics using mathematical modelling; (ii) numerical analysis of acoustic streaming, fluid flow pattern, volume fraction of micro-bubbles and turbulence using 3D CFD simulation; (iii) practical analysis of fluid flow pattern and acoustic streaming under ultrasound irradiation using Particle Image Velocimetry (PIV). In the mathematical modelling, a lone micro-bubble generated under power ultrasound irradiation was mechanistically analysed. Its characteristics were illustrated as a function of bubble radius, internal temperature and pressure (hot spot conditions) and oscillation (pulsation) velocity. The results showed that ultrasound power significantly affected the conditions of hotspots and bubble oscillation velocity. From the CFD results, it was observed that the total volume of the micro-bubbles increased by about 4.95% with each 100 W increase in power amplitude. Furthermore, the velocity of acoustic streaming increased from 29 to 119 cm/s as power increased, which was in good agreement with the PIV analysis. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.
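
    For soft symbols y_i = A_i s + n_i with independent Gaussian noise of variance sigma_i^2 on each stream, the maximum-likelihood statistic is the weighted sum with weights A_i / sigma_i^2, which is then fed to a single Viterbi decoder. A minimal sketch, with the interface being an assumption:

        import numpy as np

        def combine_symbol_streams(streams, amplitudes, noise_vars):
            # streams: (n_stations, n_symbols) array of detected soft symbols
            # weight each station by A_i / sigma_i^2, then sum across stations
            w = np.asarray(amplitudes) / np.asarray(noise_vars)
            return np.tensordot(w, np.asarray(streams), axes=1)  # combined stream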

  4. Stream Modelling

    DEFF Research Database (Denmark)

    Vestergaard, Kristian

    The development of the digital computer has been of great importance for the hydraulic engineer. Through many centuries hydraulic engineering was based on practical know-how, obtained through many hundreds of years of experience. Gradually mathematical theories were introduced and accepted among the eng...

  5. electrode array

    African Journals Online (AJOL)

    PROF EKWUEME

    A geoelectric investigation employing vertical electrical soundings (VES) using the Ajayi-Makinde Two-Electrode array and the ... arrangements used in electrical D.C. resistivity surveys. These include ... Refraction Tomography to Study the ...

  6. Pollutant transport in natural streams

    International Nuclear Information System (INIS)

    Buckner, M.R.; Hayes, D.W.

    1975-01-01

    A mathematical model has been developed to estimate the downstream effect of chemical and radioactive pollutant releases to tributary streams and rivers. The one-dimensional dispersion model was employed along with a dead zone model to describe stream transport behavior. Options are provided for sorption/desorption, ion exchange, and particle deposition in the river. The model equations are solved numerically by the LODIPS computer code. The solution method was verified by application to actual and simulated releases of radionuclides and other chemical pollutants. (U.S.)
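
    For orientation, a standard form of a one-dimensional dispersion model with a first-order loss term and a dead-zone exchange term is given below in LaTeX; this generic form is an assumption, and the exact formulation solved by LODIPS may differ.

        \frac{\partial C}{\partial t}
          = D \frac{\partial^2 C}{\partial x^2}
          - u \frac{\partial C}{\partial x}
          - \lambda C
          + \alpha \,(C_s - C),
        \qquad
        \frac{\partial C_s}{\partial t} = \beta \,(C - C_s)

    where C is the in-stream concentration, C_s the dead-zone concentration, u the mean stream velocity, D the longitudinal dispersion coefficient, λ a first-order loss rate (decay, sorption or deposition), and α, β exchange coefficients.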

  7. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.

  8. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  10. The LHCb Turbo stream

    CERN Document Server

    AUTHOR|(CDS)2070171

    2016-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 wi...

  11. Checking for Circular Dependencies in Distributed Stream Programs

    Science.gov (United States)

    2011-08-29

    extensions to express new complexities more conveniently. Teleport messaging (TMG) in the StreamIt language [30] is an example. ... Thies et al. in [30] give a TMG model for distributed stream programs. TMG is a mechanism that implements control messages for stream graphs. The TMG mechanism is designed not to interfere with the original dataflow graphs' structures and scheduling, therefore a key

  12. Introduction to stream: An Extensible Framework for Data Stream Clustering Research with R

    Directory of Open Access Journals (Sweden)

    Michael Hahsler

    2017-02-01

    Full Text Available In recent years, data streams have become an increasingly important area of research for the computer science, database and statistics communities. Data streams are ordered and potentially unbounded sequences of data points created by a typically non-stationary data generating process. Common data mining tasks associated with data streams include clustering, classification and frequent pattern mining. New algorithms for these types of data are proposed regularly and it is important to evaluate them thoroughly under standardized conditions. In this paper we introduce stream, a research tool that includes modeling and simulating data streams as well as an extensible framework for implementing, interfacing and experimenting with algorithms for various data stream mining tasks. The main advantage of stream is that it seamlessly integrates with the large existing infrastructure provided by R. In addition to data handling, plotting and easy scripting capabilities, R also provides many existing algorithms and enables users to interface code written in many programming languages popular among data mining researchers (e.g., C/C++, Java and Python. In this paper we describe the architecture of stream and focus on its use for data stream clustering research. stream was implemented with extensibility in mind and will be extended in the future to cover additional data stream mining tasks like classification and frequent pattern mining.

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  14. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  16. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
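
To make the metric concrete: under a uniform random-alignment assumption, a query of extent q in one dimension touches on average (q + c - 1)/c chunks of side c, and the per-dimension counts multiply. The sketch below uses that closed form to brute-force the best shape from a candidate set; the workload and shapes are illustrative, and the paper's exact models (steepest descent, geometric programming) are more general:

```python
from math import prod

def expected_chunks(query, chunk):
    """Expected number of chunks a (q1,...,qk) query touches on a grid of
    (c1,...,ck) chunks, assuming uniformly random integer alignment:
    (q_i + c_i - 1) / c_i per dimension, multiplied across dimensions."""
    return prod((q + c - 1) / c for q, c in zip(query, chunk))

def best_chunk_shape(queries, shapes):
    """Brute-force the candidate shape minimizing the workload-total cost."""
    return min(shapes, key=lambda s: sum(expected_chunks(q, s) for q in queries))

# Hypothetical 2-D workload: mostly wide row slabs, a few tall column slabs.
queries = [(1000, 10)] * 8 + [(10, 1000)] * 2
shapes = [(a, 4096 // a) for a in (16, 32, 64, 128, 256)]  # fixed chunk volume
print(best_chunk_shape(queries, shapes))  # elongated chunks matching the workload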

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  18. Prediction of temperature and damage in an irradiated human eye-Utilization of a detailed computer model which includes a vectorial blood stream in the choroid.

    Science.gov (United States)

    Heussner, Nico; Holl, Lukas; Nowak, Timo; Beuth, Thorsten; Spitzer, Martin S; Stork, Wilhelm

    2014-08-01

The work presented here describes the development and use of a three-dimensional thermodynamic model of the human eye for the prediction of temperatures and damage thresholds under irradiation. This model takes into account the blood flow by the implementation of a vectorial blood stream in the choroid and also uses the actual physiological extensions and tissue parameters of the eye. Furthermore, it considers evaporation, radiation and convection at the cornea as well as the eyelid. The predicted temperatures were successfully validated against existing eye models in terms of corneal and global thermal behaviour. The model's predictions were additionally checked for consistency with in-vivo temperature measurements of the cornea, the irradiated retina and its damage thresholds. These thresholds were calculated from the retinal temperatures using the Arrhenius integral. Hence the model can be used to predict the temperature increase and irradiation hazard within the human eye as long as the absorption values and the Arrhenius coefficients are known and the damage mechanism is in the thermal regime. Copyright © 2014 Elsevier Ltd. All rights reserved.
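
The damage criterion mentioned here is the standard Arrhenius integral, Omega = integral of A*exp(-Ea/(R*T(t))) dt, with damage conventionally marked at Omega >= 1. A minimal sketch, assuming the commonly cited Henriques coefficients for thermal protein denaturation rather than the paper's retinal values:

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K), universal gas constant

def arrhenius_damage(times, temps_kelvin, A, Ea):
    """Arrhenius damage integral Omega = int A*exp(-Ea/(R*T(t))) dt,
    evaluated with the trapezoidal rule; Omega >= 1 marks thermal damage."""
    rates = A * np.exp(-Ea / (R_GAS * np.asarray(temps_kelvin, dtype=float)))
    return float(np.trapz(rates, times))

# Toy transient: body temperature plus a brief excursion to about 63 deg C.
t = np.linspace(0.0, 10.0, 2001)
T = 310.15 + 26.0 * np.exp(-(((t - 5.0) / 2.0) ** 2))
omega = arrhenius_damage(t, T, A=3.1e98, Ea=6.28e5)  # Henriques coefficients
print(omega, omega >= 1.0)  # well above 1: this exposure crosses the threshold
```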

  19. Streaming potential of superhydrophobic microchannels.

    Science.gov (United States)

    Park, Hung Mok; Kim, Damoa; Kim, Se Young

    2017-03-01

To obtain a larger streaming potential, it has been suggested to employ superhydrophobic microchannels with a large velocity slip. There are two kinds of superhydrophobic surfaces, one having a smooth wall with a large Navier slip coefficient caused by the hydrophobicity of the wall material, and the other having a periodic array of no-shear slots of air pockets embedded in a no-slip wall. The electrokinetic flows over these two superhydrophobic surfaces are modelled using the Navier-Stokes equation and convection-diffusion equations of the ionic species. The Navier slip coefficient of the first kind of surfaces and the no-shear slot ratio of the second kind of surfaces are similar in the sense that the volumetric flow rate increases as these parameter values increase. However, although the streaming potential increases monotonically with respect to the Navier slip coefficient, it reaches a maximum and afterward decreases as the no-shear ratio increases. The results of the present investigation imply that the characterization of superhydrophobic surfaces employing only the measurement of volumetric flow rate against pressure drop is not appropriate and the fine structure of the superhydrophobic surfaces must be verified before predicting the streaming potential and electrokinetic flows accurately. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
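
For orientation, the classical no-slip baseline that such slip effects modify is the Helmholtz-Smoluchowski relation; the slip-length amplification factor shown below is a commonly used model from the electrokinetics literature, not a result of this paper:

```latex
% Helmholtz-Smoluchowski streaming potential (thin double layer, no slip),
% with permittivity \varepsilon, zeta potential \zeta, viscosity \mu,
% conductivity \sigma, and applied pressure drop \Delta P:
E_\mathrm{s} = \frac{\varepsilon \zeta}{\mu \sigma}\,\Delta P
% A Navier slip length b is often folded in as an apparent zeta potential,
% amplified by a factor involving the Debye length \lambda_D:
\zeta_\mathrm{app} \approx \zeta \left( 1 + \frac{b}{\lambda_D} \right)
```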

  20. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    The configuration of a tomographic array in which the object can rotate about its axis is described. The X-ray detector is a cylindrical screen perpendicular to the axis of rotation. The X-ray source has a line-shaped focus coinciding with the axis of rotation. The beam is fan-shaped with one side of this fan lying along the axis of rotation. The detector screen is placed inside an X-ray image multiplier tube

  1. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

A tomographic array with the following characteristics is described. An X-ray screen serving as detector is placed before a photomultiplier tube which itself is placed in front of a television camera connected to a set of image processors. The detector is concave towards the source and is replaceable. Different images of the object are obtained simultaneously. Optical fibers and lenses are used for transmission within the system

  2. StreamCat

    Data.gov (United States)

    U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...

  3. Prioritized Contact Transport Stream

    Science.gov (United States)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.
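
A rough sketch of the prioritization idea in Python: contacts are ordered by their assigned priority, and payload detail is dropped once a (hypothetical) bandwidth budget is spent. The class names, the size function, and the cost model are all illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Contact:
    priority: int                 # lower value = higher priority
    ident: str = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

def build_transport_stream(contacts, budget_bytes,
                           size_of=lambda c: len(str(c.payload))):
    """Greedy sketch: serve contacts in priority order; once the budget is
    exhausted, strip the payload but keep the contact's identity in the stream."""
    stream, used = [], 0
    for c in sorted(contacts):
        s = size_of(c)
        if used + s <= budget_bytes:
            stream.append(c)                                 # full record fits
            used += s
        else:
            stream.append(Contact(c.priority, c.ident, {}))  # data removed

    return stream

contacts = [Contact(1, "radar-7", {"track": [1, 2, 3]}),
            Contact(2, "sonar-3", {"track": list(range(60))})]
for c in build_transport_stream(contacts, budget_bytes=100):
    print(c.ident, "full" if c.payload else "stripped")
```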

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  6. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable

  7. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas
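
Python generators give a convenient, if informal, picture of productivity: a stream definition is productive when every prefix of the stream can be produced in finite time. A minimal sketch (the analogy to the paper's rewriting-based setting is loose):

```python
from itertools import islice

def ones():
    """Productive: the n-th element is available after finitely many steps."""
    while True:
        yield 1

def evil():
    """Not productive: the definition demands its own output before
    producing anything, so no element is ever emitted."""
    yield from evil()

print(list(islice(ones(), 5)))  # [1, 1, 1, 1, 1]
# next(evil()) never yields; in CPython it dies with a RecursionError.
```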

  8. Real-time change detection in data streams with FPGAs

    International Nuclear Information System (INIS)

    Vega, J.; Dormido-Canto, S.; Cruz, T.; Ruiz, M.; Barrera, E.; Castro, R.; Murari, A.; Ochando, M.

    2014-01-01

    Highlights: • Automatic recognition of changes in data streams of multidimensional signals. • Detection algorithm based on testing exchangeability on-line. • Real-time and off-line applicability. • Real-time implementation in FPGAs. - Abstract: The automatic recognition of changes in data streams is useful in both real-time and off-line data analyses. This article shows several effective change-detecting algorithms (based on martingales) and describes their real-time applicability in the data acquisition systems through the use of Field Programmable Gate Arrays (FPGA). The automatic event recognition system is absolutely general and it does not depend on either the particular event to detect or the specific data representation (waveforms, images or multidimensional signals). The developed approach provides good results for change detection in both the temporal evolution of profiles and the two-dimensional spatial distribution of volume emission intensity. The average computation time in the FPGA is 210 μs per profile
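
A minimal software sketch of the martingale family the abstract refers to (testing exchangeability online): a strangeness score feeds randomized p-values into a power martingale, which grows only when exchangeability is violated. The strangeness measure (distance to the running mean), epsilon, and threshold below are illustrative choices, not the paper's FPGA design:

```python
import random

def change_detector(stream, eps=0.92, threshold=10.0):
    """Martingale-based change detection sketch: randomized p-values from a
    strangeness score drive a power martingale; an alarm fires when it grows."""
    xs, alphas, M = [], [], 1.0
    for n, x in enumerate(stream, 1):
        xs.append(x)
        a = abs(x - sum(xs) / len(xs))           # strangeness of the new point
        alphas.append(a)
        theta = random.random()
        p = (sum(b > a for b in alphas)
             + theta * sum(b == a for b in alphas)) / len(alphas)
        M *= eps * max(p, 1e-12) ** (eps - 1.0)  # power martingale update
        if M > threshold:
            yield n                              # alarm: change detected here
            xs, alphas, M = [], [], 1.0          # restart after the alarm

random.seed(0)
data = [0.0] * 200 + [5.0] * 200                 # mean shift at t = 200
print(list(change_detector(iter(data))))         # typically fires soon after 200
```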

  9. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  15. Smart time-pulse coding photoconverters as basic components 2D-array logic devices for advanced neural networks and optical computers

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Michalnichenko, Nikolay N.

    2004-04-01

The article deals with a concept for building arithmetic-logic devices (ALD) with a 2D-structure and optical 2D-array inputs-outputs as advanced high-productivity parallel basic operational training modules for the realization of basic operations of continuous, neuro-fuzzy, multilevel, threshold and other logics and vector-matrix, vector-tensor procedures in neural networks, which consists in the use of a time-pulse coding (TPC) architecture and 2D-array smart optoelectronic pulse-width (or pulse-phase) modulators (PWM or PPM) for transformation of input pictures. The input grayscale image is transformed into a group of corresponding short optical pulses or time positions of optical two-level signal swing. We consider optoelectronic implementations of universal (quasi-universal) picture elements of two-valued ALD, multi-valued ALD, analog-to-digital converters, multilevel threshold discriminators, and we show that 2D-array time-pulse photoconverters are the base elements for these devices. We show simulation results for the time-pulse photoconverters as base components. The devices considered have the following technical parameters: input optical signal power is 200 nW to 200 μW (for a photodiode responsivity of 0.5 A/W), conversion time is from tens of microseconds to a millisecond, supply voltage is 1.5 to 15 V, power consumption is from tens of microwatts to a milliwatt, and conversion nonlinearity is less than 1%. One cell consists of 2-3 photodiodes and about ten CMOS transistors. This simplicity of the cells allows their integration in arrays of 32×32, 64×64 elements and more.
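
A toy model of the time-pulse (PWM) coding step, assuming a linear mapping between grey level and pulse width; the time scale and bit depth are illustrative, not the device's:

```python
import numpy as np

def to_pulse_widths(image, t_max_us=100.0, levels=255):
    """PWM-style time-pulse coding sketch: map each pixel's grey level to an
    optical pulse width (brighter pixel -> longer pulse)."""
    img = np.clip(np.asarray(image, dtype=float), 0, levels)
    return img / levels * t_max_us          # per-pixel pulse width, microseconds

def pulses_to_grey(widths_us, t_max_us=100.0, levels=255):
    """Inverse transform, as a receiving 2D-array photoconverter would apply."""
    return np.round(widths_us / t_max_us * levels).astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, size=(4, 4))
assert np.array_equal(pulses_to_grey(to_pulse_widths(img)), img)  # lossless
```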

  16. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  17. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  19. COMPUTING

    CERN Multimedia

    Matthias Kasemann

Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  20. COMPUTING

    CERN Multimedia

P. McBride

The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  3. SAQC: SNP Array Quality Control

    Directory of Open Access Journals (Sweden)

    Li Ling-Hui

    2011-04-01

Abstract Background Genome-wide single-nucleotide polymorphism (SNP) arrays containing hundreds of thousands of SNPs from the human genome have proven useful for studying important human genome questions. Data quality of SNP arrays plays a key role in the accuracy and precision of downstream data analyses. However, good indices for assessing data quality of SNP arrays have not yet been developed. Results We developed new quality indices to measure the quality of SNP arrays and/or DNA samples and investigated their statistical properties. The indices quantify a departure of estimated individual-level allele frequencies (AFs) from expected frequencies via standardized distances. The proposed quality indices followed lognormal distributions in several large genomic studies that we empirically evaluated. AF reference data and quality index reference data for different SNP array platforms were established based on samples from various reference populations. Furthermore, a confidence interval method based on the underlying empirical distributions of quality indices was developed to identify poor-quality SNP arrays and/or DNA samples. Analyses of authentic biological data and simulated data show that this new method is sensitive and specific for the detection of poor-quality SNP arrays and/or DNA samples. Conclusions This study introduces new quality indices, establishes references for AFs and quality indices, and develops a detection method for poor-quality SNP arrays and/or DNA samples. We have developed a new computer program that utilizes these methods called SNP Array Quality Control (SAQC). SAQC software is written in R and R-GUI and was developed as a user-friendly tool for the visualization and evaluation of data quality of genome-wide SNP arrays. The program is available online (http://www.stat.sinica.edu.tw/hsinchou/genetics/quality/SAQC.htm).
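
A hypothetical reading of such a quality index, as a standardized distance with a lognormal-style cutoff; the exact definitions used in SAQC may differ:

```python
import numpy as np

def quality_index(af_est, af_ref, af_sd):
    """Hypothetical quality index in the spirit of SAQC: a standardized
    distance between an array's estimated allele frequencies and population
    reference values (larger = worse quality)."""
    z = (np.asarray(af_est) - np.asarray(af_ref)) / np.asarray(af_sd)
    return float(np.sqrt(np.mean(z ** 2)))

def flag_poor_arrays(indices, k=1.96):
    """Flag arrays whose log-index lies outside a normal-theory interval,
    mirroring the paper's empirical lognormal confidence-interval idea."""
    logs = np.log(np.asarray(indices, dtype=float))
    lo, hi = logs.mean() - k * logs.std(), logs.mean() + k * logs.std()
    return [i for i, v in enumerate(logs) if not lo <= v <= hi]
```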

  4. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation at low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  5. Solar wind stream interfaces

    International Nuclear Information System (INIS)

    Gosling, J.T.; Asbridge, J.R.; Bame, S.J.; Feldman, W.C.

    1978-01-01

Measurements aboard Imp 6, 7, and 8 reveal that approximately one third of all high-speed solar wind streams observed at 1 AU contain a sharp boundary (of thickness less than approximately 4 × 10⁴ km) near their leading edge, called a stream interface, which separates plasma of distinctly different properties and origins. Identified as discontinuities across which the density drops abruptly, the proton temperature increases abruptly, and the speed rises, stream interfaces are remarkably similar in character from one stream to the next. A superposed epoch analysis of plasma data has been performed for 23 discontinuous stream interfaces observed during the interval March 1971 through August 1974. Among the results of this analysis are the following: (1) a stream interface separates what was originally thick (i.e., dense) slow gas from what was originally thin (i.e., rare) fast gas; (2) the interface is the site of a discontinuous shear in the solar wind flow in a frame of reference corotating with the sun; (3) stream interfaces occur at speeds less than 450 km s⁻¹ and close to or at the maximum of the pressure ridge at the leading edges of high-speed streams; (4) a discontinuous rise by approximately 40% in electron temperature occurs at the interface; and (5) discontinuous changes (usually rises) in alpha particle abundance and flow speed relative to the protons occur at the interface. Stream interfaces do not generally recur on successive solar rotations, even though the streams in which they are embedded often do. At distances beyond several astronomical units, stream interfaces should be bounded by forward-reverse shock pairs; three of four reverse shocks observed at 1 AU during 1971-1974 were preceded within approximately 1 day by stream interfaces. Our observations suggest that many streams close to the sun are bounded on all sides by large radial velocity shears separating rapidly expanding plasma from more slowly expanding plasma

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. A GlideInWMS installation and its components are now deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  7. Ultrasound-driven Viscous Streaming, Modelled via Momentum Injection

    Directory of Open Access Journals (Sweden)

    James PACKER

    2008-12-01

Microfluidic devices can use steady streaming caused by the ultrasonic oscillation of one or many gas bubbles in a liquid to drive small scale flow. Such streaming flows are difficult to evaluate, as analytic solutions are not available for any but the simplest cases, and direct computational fluid dynamics models are unsatisfactory due to the large difference in flow velocity between the steady streaming and the leading order oscillatory motion. We develop a numerical technique which uses a two-stage multiscale computational fluid dynamics approach to find the streaming flow as a steady problem, and validate this model against experimental results.

  8. STREAM: A First Programming Process

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Kölling, Michael

    2009-01-01

Programming is recognized as one of seven grand challenges in computing education. Decades of research have shown that the major problems novices experience are composition-based: they may know what the individual programming language constructs are, but they do not know how to put them together. Despite this fact, textbooks, educational practice, and programming education research hardly address the issue of teaching the skills needed for systematic development of programs. We provide a conceptual framework for incremental program development, called Stepwise Improvement, which unifies best practice to derive a programming process, STREAM, designed specifically for novices. STREAM is a carefully down-scaled version of a full and rich agile software engineering process particularly suited for novices learning object-oriented programming. In using it we hope to achieve two things: to help novice...

  9. On-Line Testing and Reconfiguration of Field Programmable Gate Arrays (FPGAs) for Fault-Tolerant (FT) Applications in Adaptive Computing Systems (ACS)

    National Research Council Canada - National Science Library

    Abramovici, Miron

    2002-01-01

    Adaptive computing systems (ACS) rely on reconfigurable hardware to adapt the system operation to changes in the external environment, and to extend mission capability by implementing new functions on the same hardware platform...

  10. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
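
The fraction-free method referred to here is usually credited to Bareiss: a 2x2-determinant cross-multiplication keeps every intermediate value an exact integer, and each division by the previous pivot is guaranteed exact, so no fractions ever appear. A minimal sketch without pivoting:

```python
def bareiss_eliminate(a):
    """Fraction-free (Bareiss) Gaussian elimination on an integer (augmented)
    matrix; the // division below is always exact for integer input."""
    n = len(a)
    prev = 1
    for k in range(n):
        assert a[k][k] != 0, "this sketch omits pivoting"
        for i in range(k + 1, n):
            for j in range(k + 1, len(a[i])):
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a  # upper triangular; a[n-1][n-1] is det of the coefficient part

# Solve 2x + y = 5, 4x + 3y = 11 via the augmented matrix:
m = bareiss_eliminate([[2, 1, 5], [4, 3, 11]])
print(m)  # [[2, 1, 5], [0, 2, 2]] -> back-substitution gives y = 1, x = 2
```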

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  12. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not run in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  13. Inventory of miscellaneous streams

    International Nuclear Information System (INIS)

    Lueck, K.J.

    1995-09-01

    On December 23, 1991, the US Department of Energy, Richland Operations Office (RL) and the Washington State Department of Ecology (Ecology) agreed to adhere to the provisions of the Department of Ecology Consent Order. The Consent Order lists the regulatory milestones for liquid effluent streams at the Hanford Site to comply with the permitting requirements of Washington Administrative Code. The RL provided the US Congress a Plan and Schedule to discontinue disposal of contaminated liquid effluent into the soil column on the Hanford Site. The plan and schedule document contained a strategy for the implementation of alternative treatment and disposal systems. This strategy included prioritizing the streams into two phases. The Phase 1 streams were considered to be higher priority than the Phase 2 streams. The actions recommended for the Phase 1 and 2 streams in the two reports were incorporated in the Hanford Federal Facility Agreement and Consent Order. Miscellaneous Streams are those liquid effluents streams identified within the Consent Order that are discharged to the ground but are not categorized as Phase 1 or Phase 2 Streams. This document consists of an inventory of the liquid effluent streams being discharged into the Hanford soil column

  14. Hydrography - Streams and Shorelines

    Data.gov (United States)

    California Natural Resource Agency — The hydrography layer consists of flowing waters (rivers and streams), standing waters (lakes and ponds), and wetlands -- both natural and manmade. Two separate...

  15. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
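
A compact least-squares ESPRIT for a uniform linear array, exploiting the single shift invariance described above; the TLS variant and calibration details are omitted, and the simulation parameters are illustrative:

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Least-squares ESPRIT for a uniform linear array.
    X: (sensors x snapshots) complex data; d: element spacing in wavelengths.
    Returns direction-of-arrival estimates in radians."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    Es = vecs[:, -n_sources:]                    # signal subspace
    E1, E2 = Es[:-1, :], Es[1:, :]               # two maximally overlapping subarrays
    Psi = np.linalg.lstsq(E1, E2, rcond=None)[0] # shift invariance: E1 @ Psi ~ E2
    phases = np.angle(np.linalg.eigvals(Psi))    # = 2*pi*d*sin(theta)
    return np.arcsin(phases / (2 * np.pi * d))

# Quick check: two sources at -20 and +20 degrees, 8 sensors, light noise.
rng = np.random.default_rng(0)
M, N = 8, 500
theta = np.deg2rad([-20.0, 20.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(theta)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.rad2deg(np.sort(esprit_doa(X, 2))))  # approximately [-20, 20]
```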

  16. Interactive collision detection for deformable models using streaming AABBs.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2007-01-01

We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and frame rates of 30 to 100 FPS were obtained depending on the complexity of models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-times performance improvement over the earlier approach. We also made comparisons with a SW-based AABB
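
The broad-phase test at the heart of such pipelines is the axis-wise interval overlap of AABBs, which vectorizes naturally into the kind of massively parallel pairwise test described. A NumPy sketch standing in for the GPU streams (the data layout and box sets are illustrative):

```python
import numpy as np

def aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b):
    """All-pairs AABB broad phase: boxes overlap iff their intervals overlap
    on every axis. Returns a (len_a, len_b) boolean matrix of candidate
    pairs to hand to the primitive-level narrow phase."""
    no_overlap = (mins_a[:, None, :] > maxs_b[None, :, :]) | \
                 (maxs_a[:, None, :] < mins_b[None, :, :])
    return ~no_overlap.any(axis=2)

# Two tiny box sets; pair (0, 0) overlaps, all other pairs do not:
a_min = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]]); a_max = a_min + 1.0
b_min = np.array([[0.5, 0.5, 0.5], [9.0, 9.0, 9.0]]); b_max = b_min + 1.0
print(aabb_overlaps(a_min, a_max, b_min, b_max))  # [[True False] [False False]]
```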

  17. SCORE-EVET: a computer code for the multidimensional transient thermal-hydraulic analysis of nuclear fuel rod arrays. [BWR; PWR

    Energy Technology Data Exchange (ETDEWEB)

    Benedetti, R. L.; Lords, L. V.; Kiser, D. M.

    1978-02-01

The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady state solution scheme to initialize the flow field, (b) steady state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage.

  18. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  19. X-ray focusing using capillary arrays

    International Nuclear Information System (INIS)

    Nugent, K.A.; Chapman, H.N.

    1990-01-01

A new form of X-ray focusing device based on glass capillary arrays is presented. Theoretical and experimental results for arrays of circular capillaries and theoretical and computational results for square-hole capillaries are given. It is envisaged that devices such as these will find wide application in X-ray optics as achromatic condensers and collimators. 3 refs., 4 figs

  20. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

The LHCb experiment stores around 10¹¹ collision events per year. A typical physics analysis deals with a final sample of up to 10⁷ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) on the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied on data to be recorded in 2017.
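
A toy version of the optimization: treat each line as the set of events it selects, cost a stream by its distinct events multiplied by its member lines (since a job interested in one line must scan the whole stream), and merge streams greedily. The cost model and algorithm are illustrative, not LHCb's actual method:

```python
def stream_cost(streams, line_events):
    """Each stream costs (its total distinct events) once per member line."""
    return sum(len(set().union(*(line_events[l] for l in s))) * len(s)
               for s in streams)

def greedy_streams(line_events, n_streams):
    """Agglomerative sketch: start with one stream per line, repeatedly merge
    the pair of streams whose union increases the total cost the least."""
    streams = [[l] for l in line_events]
    while len(streams) > n_streams:
        best = None
        for i in range(len(streams)):
            for j in range(i + 1, len(streams)):
                trial = [s for k, s in enumerate(streams) if k not in (i, j)]
                trial.append(streams[i] + streams[j])
                c = stream_cost(trial, line_events)
                if best is None or c < best[0]:
                    best = (c, i, j)
        _, i, j = best
        streams = [s for k, s in enumerate(streams)
                   if k not in (i, j)] + [streams[i] + streams[j]]
    return streams

# Lines A/B and C/D select overlapping event sets, so they group together:
lines = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {7, 8}, "D": {8, 9}}
print(greedy_streams(lines, 2))  # [['A', 'B'], ['C', 'D']]
```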

  1. StreamStats, version 4

    Science.gov (United States)

    Ries, Kernell G.; Newson, Jeremy K.; Smith, Martyn J.; Guthrie, John D.; Steeves, Peter A.; Haluska, Tana L.; Kolb, Katharine R.; Thompson, Ryan F.; Santoro, Richard D.; Vraga, Hans W.

    2017-10-30

Introduction StreamStats version 4, available at https://streamstats.usgs.gov, is a map-based web application that provides an assortment of analytical tools that are useful for water-resources planning and management, and engineering purposes. Developed by the U.S. Geological Survey (USGS), the primary purpose of StreamStats is to provide estimates of streamflow statistics for user-selected ungaged sites on streams and for USGS streamgages, which are locations where streamflow data are collected. Streamflow statistics, such as the 1-percent flood, the mean flow, and the 7-day 10-year low flow, are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. For example, estimates of the 1-percent flood (which is exceeded, on average, once in 100 years and has a 1-percent chance of exceedance in any year) are used to create flood-plain maps that form the basis for setting insurance rates and land-use zoning. This and other streamflow statistics also are used for dam, bridge, and culvert design; water-supply planning and management; permitting of water withdrawals and wastewater and industrial discharges; hydropower facility design and regulation; and setting of minimum allowed streamflows to protect freshwater ecosystems. Streamflow statistics can be computed from available data at USGS streamgages depending on the type of data collected at the stations. Most often, however, streamflow statistics are needed at ungaged sites, where no streamflow data are available to determine the statistics.

  2. Explaining the "Pulse of Protoplasm": the search for molecular mechanisms of protoplasmic streaming.

    Science.gov (United States)

    Dietrich, Michael R

    2015-01-01

    Explanations for protoplasmic streaming began with appeals to contraction in the eighteenth century and ended with appeals to contraction in the twentieth. During the intervening years, biologists proposed a diverse array of mechanisms for streaming motions. This paper focuses on the re-emergence of contraction among the molecular mechanisms proposed for protoplasmic streaming during the twentieth century. The revival of contraction is a result of a broader transition from colloidal chemistry to a macromolecular approach to the chemistry of proteins, the recognition of the phenomena of shuttle streaming and the pulse of protoplasm, and the influential analogy between protoplasmic streaming and muscle contraction. © 2014 Institute of Botany, Chinese Academy of Sciences.

  3. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two-dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY™, capable of two-frequency operation, and offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software

  4. Asteroid/meteorite streams

    Science.gov (United States)

    Drummond, J.

The independent discovery of the same three streams (named alpha, beta, and gamma) among 139 Earth-approaching asteroids and among 89 meteorite-producing fireballs presents the possibility of matching specific meteorites to specific asteroids, or at least to asteroids in the same stream and, therefore, presumably of the same composition. Although perhaps of limited practical value, the three meteorites with known orbits are all ordinary chondrites. To identify, in general, the taxonomic type of the parent asteroid, however, would be of great scientific interest, since these most abundant meteorite types cannot be unambiguously spectrally matched to an asteroid type. The H5 Pribram meteorite and asteroid 4486 (unclassified) are not part of a stream, but travel in fairly similar orbits. The LL5 Innisfree meteorite is orbitally similar to asteroid 1989DA (unclassified), and both are members of a fourth stream (delta) defined by five meteorite-dropping fireballs and this one asteroid. The H5 Lost City meteorite is orbitally similar to 1980AA (S type), which is a member of stream gamma defined by four asteroids and four fireballs. Another asteroid in this stream is classified as an S type, another is QU, and the fourth is unclassified. This stream suggests that ordinary chondrites should be associated with S (and/or Q) asteroids. Two of the known four V type asteroids belong to another stream, beta, defined by five asteroids and four meteorite-dropping (but unrecovered) fireballs, making it the most probable source of the eucrites. The final stream, alpha, defined by five asteroids and three fireballs, is of unknown composition since no meteorites have been recovered and only one asteroid has an ambiguous classification of QRS. If this stream, or any other as yet undiscovered ones, were found to be composed of a more practical material (e.g., water- or metal-rich), then recovery of the associated meteorites would provide an opportunity for in-hand analysis of a potential

  5. Coupling in reflector arrays

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1968-01-01

In order to reduce the space occupied by a reflector array, it is desirable to arrange the array antennas as close to each other as possible; however, in this case coupling between the array antennas will reduce the reflecting properties of the reflector array. The purpose of the present...

  6. Application of the Hydroecological Integrity Assessment Process for Missouri Streams

    Science.gov (United States)

    Kennen, Jonathan G.; Henriksen, James A.; Heasley, John; Cade, Brian S.; Terrell, James W.

    2009-01-01

Natural flow regime concepts and theories have established the justification for maintaining or restoring the range of natural hydrologic variability so that physiochemical processes, native biodiversity, and the evolutionary potential of aquatic and riparian assemblages can be sustained. A synthesis of recent research advances in hydroecology, coupled with stream classification using hydroecologically relevant indices, has produced the Hydroecological Integrity Assessment Process (HIP). HIP consists of (1) a regional classification of streams into hydrologic stream types based on flow data from long-term gaging-station records for relatively unmodified streams, (2) an identification of stream-type specific indices that address 11 subcomponents of the flow regime, (3) an ability to establish environmental flow standards, (4) an evaluation of hydrologic alteration, and (5) a capacity to conduct alternative analyses. The process starts with the identification of a hydrologic baseline (reference condition) for selected locations, uses flow data from a stream-gage network, and proceeds to classify streams into hydrologic stream types. Concurrently, the analysis identifies a set of non-redundant and ecologically relevant hydrologic indices for 11 subcomponents of flow for each stream type. Furthermore, regional hydrologic models for synthesizing flow conditions across a region and the development of flow-ecology response relations for each stream type can be added to further enhance the process. The application of HIP to Missouri streams identified five stream types: (1) intermittent, (2) perennial runoff-flashy, (3) perennial runoff-moderate baseflow, (4) perennial groundwater-stable, and (5) perennial groundwater-super stable. Two Missouri-specific computer software programs were developed: (1) a Missouri Hydrologic Assessment Tool (MOHAT), which is used to establish a hydrologic baseline, provide options for setting environmental flow standards, and compare past and

  7. Percent Forest Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  8. Percent Agriculture Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  9. Delivering Instruction via Streaming Media: A Higher Education Perspective.

    Science.gov (United States)

    Mortensen, Mark; Schlieve, Paul; Young, Jon

    2000-01-01

    Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)

  10. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  11. The FPGA Pixel Array Detector

    International Nuclear Information System (INIS)

    Hromalik, Marianne S.; Green, Katherine S.; Philipp, Hugh T.; Tate, Mark W.; Gruner, Sol M.

    2013-01-01

    A proposed design for a reconfigurable x-ray Pixel Array Detector (PAD) is described. It operates by integrating a high-end commercial field programmable gate array (FPGA) into a 3-layer device along with a high-resistivity diode detection layer and a custom, application-specific integrated circuit (ASIC) layer. The ASIC layer contains an energy-discriminating photon-counting front end with photon hits streamed directly to the FPGA via a massively parallel, high-speed data connection. FPGA resources can be allocated to perform user-defined tasks on the pixel data streams, including the implementation of a direct time autocorrelation function (ACF) with time resolution down to 100 ns. Using the FPGA at the front end to calculate the ACF reduces the required data transfer rate by several orders of magnitude when compared to a fast framing detector. The FPGA-ASIC high-speed interface, as well as the in-FPGA implementation of a real-time ACF for x-ray photon correlation spectroscopy experiments, has been designed and simulated. A 16×16 pixel prototype of the ASIC has been fabricated and is being tested. -- Highlights: ► We describe the novelty and need for the FPGA Pixel Array Detector. ► We describe the specifications and design of the Diode, ASIC and FPGA layers. ► We highlight the Autocorrelation Function (ACF) for speckle as an example application. ► Simulated FPGA output calculates the ACF for different input bitstreams to 100 ns. ► Data transfer rate reduced by 640× and real-time ACF sped up by 100× compared with other methods.
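
    As an illustration of the kind of per-pixel computation moved into the FPGA here, the following is a minimal Python/NumPy sketch of a direct time autocorrelation of a photon-count stream; the bin width, stream length, and normalization are illustrative assumptions, not the detector's actual firmware.

        import numpy as np

        def direct_acf(counts, max_lag):
            # Direct (brute-force) autocorrelation of a photon-count series.
            # counts: photon counts per time bin (e.g., one bin per 100 ns);
            # returns g(tau) for tau = 1..max_lag, normalized so that an
            # uncorrelated stream gives g ~ 1.
            counts = np.asarray(counts, dtype=float)
            n = counts.size
            lags = np.arange(1, max_lag + 1)
            g = np.array([np.mean(counts[:n - k] * counts[k:]) for k in lags])
            return lags, g / counts.mean() ** 2

        # Toy input: Poisson counts in 100 ns bins
        rng = np.random.default_rng(0)
        lags, g = direct_acf(rng.poisson(2.0, size=100_000), max_lag=64)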

  12. Wadeable Streams Assessment Data

    Science.gov (United States)

    The Wadeable Streams Assessment (WSA) is a first-ever statistically-valid survey of the biological condition of small streams throughout the U.S. The U.S. Environmental Protection Agency (EPA) worked with the states to conduct the assessment in 2004-2005. Data for each parameter sampled in the Wadeable Streams Assessment (WSA) are available for downloading in a series of files as comma separated values (*.csv). Each *.csv data file has a companion text file (*.txt) that lists a dataset label and individual descriptions for each variable. Users should view the *.txt files first to help guide their understanding and use of the data.

  13. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computers. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
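
    The record does not spell out the embedding rule; as a minimal illustration of the general idea (imperceptibly quantizing pixel values to carry hidden bits), a toy spatial-domain scheme in Python might look as follows. Robust schemes typically embed in a transform domain instead, so treat this only as a sketch of the concept.

        import numpy as np

        def embed_bits(frame, bits, step=2):
            # Quantize one carrier pixel per bit: even multiples of `step`
            # encode 0, odd multiples encode 1 (a toy LSB-style rule).
            out = frame.astype(np.int16).ravel()
            for i, b in enumerate(bits):
                out[i] = (out[i] // (2 * step)) * (2 * step) + (step if b else 0)
            return np.clip(out, 0, 255).astype(np.uint8).reshape(frame.shape)

        def extract_bits(frame, n_bits, step=2):
            flat = frame.astype(np.int16).ravel()
            return [int(flat[i] % (2 * step) >= step) for i in range(n_bits)]

        frame = np.full((4, 4), 200, dtype=np.uint8)  # toy luminance plane
        marked = embed_bits(frame, [1, 0, 1, 1])
        assert extract_bits(marked, 4) == [1, 0, 1, 1]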

  14. Vectorization, parallelization and implementation of nuclear codes [MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3] on the VPP500 computer system. Progress report 1995 fiscal year

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo [Fujitsu Ltd., Tokyo (Japan); Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru

    1996-06-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. Neutron and photon transport calculation code MVP/GMVP and relativistic quantum molecular dynamics code QMDRELP have been parallelized. Extended quantum molecular dynamics code EQMD and adiabatic base calculation code HSABC have been parallelized and vectorized. Ballooning turbulence simulation code CURBAL, 3-D non-stationary compressible fluid dynamics code STREAM V3.1, operating plasma analysis code TOSCA and eddy current analysis code EDDYCAL have been vectorized. Reactor safety analysis codes RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  15. Vectorization, parallelization and implementation of nuclear codes [MVP/GMVP, QMDRELP, EQMD, HSABC, CURBAL, STREAM V3.1, TOSCA, EDDYCAL, RELAP5/MOD2/C36-05, RELAP5/MOD3] on the VPP500 computer system. Progress report 1995 fiscal year

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Watanabe, Hideo; Fujita, Toyozo; Kawai, Wataru; Harada, Hiroo; Gorai, Kazuo; Yamasaki, Kazuhiko; Shoji, Makoto; Fujii, Minoru.

    1996-07-01

    At the Center for Promotion of Computational Science and Engineering, eight time-consuming nuclear codes suggested by users have been vectorized and parallelized on the VPP500 computer system. In addition, two nuclear codes used on the VP2600 computer system were implemented on the VPP500 computer system. Neutron and photon transport calculation code MVP/GMVP and relativistic quantum molecular dynamics code QMDRELP have been parallelized. Extended quantum molecular dynamics code EQMD and adiabatic base calculation code HSABC have been parallelized and vectorized. Ballooning turbulence simulation code CURBAL, 3-D non-stationary compressible fluid dynamics code STREAM V3.1, operating plasma analysis code TOSCA and eddy current analysis code EDDYCAL have been vectorized. Reactor safety analysis codes RELAP5/MOD2/C36-05 and RELAP5/MOD3 were implemented on the VPP500 computer system. (author)

  16. Future Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  17. Channelized Streams in Iowa

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This draft dataset consists of all ditches or channelized pieces of stream that could be identified using three input datasets; namely the 1:24,000 National...

  18. Stochastic ice stream dynamics.

    Science.gov (United States)

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-09

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.

  19. Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  20. Streaming tearing mode

    Science.gov (United States)

    Shigeta, M.; Sato, T.; Dasgupta, B.

    1985-01-01

    The magnetohydrodynamic stability of streaming tearing mode is investigated numerically. A bulk plasma flow parallel to the antiparallel magnetic field lines and localized in the neutral sheet excites a streaming tearing mode more strongly than the usual tearing mode, particularly for the wavelength of the order of the neutral sheet width (or smaller), which is stable for the usual tearing mode. Interestingly, examination of the eigenfunctions of the velocity perturbation and the magnetic field perturbation indicates that the streaming tearing mode carries more energy in terms of the kinetic energy rather than the magnetic energy. This suggests that the streaming tearing mode instability can be a more feasible mechanism of plasma acceleration than the usual tearing mode instability.

  1. DNR 24K Streams

    Data.gov (United States)

    Minnesota Department of Natural Resources — 1:24,000 scale streams captured from USGS seven and one-half minute quadrangle maps, with perennial vs. intermittent classification, and connectivity through lakes,...

  2. Trout Stream Special Regulations

    Data.gov (United States)

    Minnesota Department of Natural Resources — This layer shows Minnesota trout streams that have a special regulation as described in the 2006 Minnesota Fishing Regulations. Road crossings were determined using...

  3. Scientific stream pollution analysis

    National Research Council Canada - National Science Library

    Nemerow, Nelson Leonard

    1974-01-01

    A comprehensive description of the analysis of water pollution that presents a careful balance of the biological, hydrological, chemical and mathematical concepts involved in the evaluation of stream...

  4. Collaborative Media Streaming

    OpenAIRE

    Kahmann, Verena

    2008-01-01

    Multimedia services delivered over IP technology, such as IPTV or video-on-demand, are currently a much-discussed topic. Technically, such services are classified under the term "streaming": a server sends media data continuously to receivers, which immediately process and display the data. Via a return channel, the customer has the possibility of influencing the playback. A further development of these streaming services is the possibility of, together with others, ...

  5. Computer-aided method for identification of major flavone/flavonol glycosides by high-performance liquid chromatography-diode array detection-tandem mass spectrometry (HPLC-DAD-MS/MS).

    Science.gov (United States)

    Wang, Zhengfang; Lin, Longze; Harnly, James M; Harrington, Peter de B; Chen, Pei

    2014-11-01

    A new computational tool is proposed here for tentatively identifying major (UV quantifiable) flavone/flavonol glycoside peaks of high-performance liquid chromatography (HPLC)-diode array detection (DAD)-tandem mass spectrometry (MS/MS) profiles, based on a MATLAB script implementing an in-house algorithm. The HPLC-DAD-MS/MS profiles of red onion, Chinese lettuce, carrot leaf, and celery seed extracts were analyzed by the proposed computer-aided screening method for identifying possible flavone/flavonol glycoside peaks from the HPLC-UV and MS total ion current (TIC) chromatograms. The number of identified flavone/flavonol glycoside peaks of the HPLC-UV chromatograms is four, four, six, and nine for red onion, Chinese lettuce, carrot leaf, and celery seed, respectively. These results have been validated by human experts. For the batch processing of nine HPLC-DAD-MS/MS profiles of celery seed extract, the entire script execution time was within 15 s, while manual calculation of only one HPLC-DAD-MS/MS profile by a flavonoid expert could take hours. Therefore, this MATLAB-based screening method is able to facilitate the HPLC-DAD-MS/MS analysis of flavone/flavonol glycosides in plants to a large extent.

  6. S_N computational benchmark solutions for slab geometry models of a gas-cooled fast reactor (GCFR) lattice cell

    International Nuclear Information System (INIS)

    McCoy, D.R.

    1981-01-01

    S_N computational benchmark solutions are generated for a one-group and multigroup fuel-void slab lattice cell, which is a rough model of a gas-cooled fast reactor (GCFR) lattice cell. The reactivity induced by the extrusion of the fuel material into the voided region is determined for a series of partially extruded lattice cell configurations. A special modified Gauss S_N ordinate array design is developed in order to obtain eigenvalues with errors less than 0.03% in all of the configurations considered. The modified Gauss S_N ordinate array design has substantially improved eigenvalue angular convergence behavior when compared to existing S_N ordinate array designs used in neutron streaming applications. The angular refinement computations are performed in some cases by using a perturbation theory method which enables one to obtain high-order S_N eigenvalue estimates at greatly reduced computational cost

  7. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  8. Streaming Pool: reuse, combine and create reactive streams with pleasure

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    When connecting together heterogeneous and complex systems, it is not easy to exchange data between components. Streams of data are successfully used in industry in order to overcome this problem, especially in the case of "live" data. Streams are a specialization of the Observer design pattern and they provide asynchronous and non-blocking data flow. The ongoing effort of the ReactiveX initiative is one example that demonstrates how much demand there is for this technology, even among big companies. Bridging the discrepancies of different technologies with common interfaces is already done by the Reactive Streams initiative and, in the JVM world, via the reactive-streams-jvm interfaces. Streaming Pool is a framework for providing and discovering reactive streams. Through the mechanism of dependency injection provided by the Spring Framework, Streaming Pool provides a so-called Discovery Service. This object can discover and chain streams of data that are technologically agnostic, through the use of Stream IDs. The stream to ...

  9. Shielding in ungated field emitter arrays

    Energy Technology Data Exchange (ETDEWEB)

    Harris, J. R. [U.S. Navy Reserve, Navy Operational Support Center New Orleans, New Orleans, Louisiana 70143 (United States); Jensen, K. L. [Code 6854, Naval Research Laboratory, Washington, D.C. 20375 (United States); Shiffler, D. A. [Directed Energy Directorate, Air Force Research Laboratory, Albuquerque, New Mexico 87117 (United States); Petillo, J. J. [Leidos, Billerica, Massachusetts 01821 (United States)

    2015-05-18

    Cathodes consisting of arrays of high aspect ratio field emitters are of great interest as sources of electron beams for vacuum electronic devices. The desire for high currents and current densities drives the cathode designer towards a denser array, but for ungated emitters, denser arrays also lead to increased shielding, in which the field enhancement factor β of each emitter is reduced due to the presence of the other emitters in the array. To facilitate the study of these arrays, we have developed a method for modeling high aspect ratio emitters using tapered dipole line charges. This method can be used to investigate proximity effects from similar emitters an arbitrary distance away and is much less computationally demanding than competing simulation approaches. Here, we introduce this method and use it to study shielding as a function of array geometry. Emitters with aspect ratios of 10^2–10^4 are modeled, and the shielding-induced reduction in β is considered as a function of tip-to-tip spacing for emitter pairs and for large arrays with triangular and square unit cells. Shielding is found to be negligible when the emitter spacing is greater than the emitter height for the two-emitter array, or about 2.5 times the emitter height in the large arrays, in agreement with previously published results. Because the onset of shielding occurs at virtually the same emitter spacing in the square and triangular arrays, the triangular array is preferred for its higher emitter density at a given emitter spacing. The primary contribution to shielding in large arrays is found to come from emitters within a distance of three times the unit cell spacing for both square and triangular arrays.

  10. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent manner

  11. Alignment data streams for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B; Amorim, A; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; Garcia, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: a physics stream, an express stream, a calibration stream, and a diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and possibly gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and the performance issues

  12. Alignment data stream for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B

    2010-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and the calibration stream. The calibration stream will be transferred to the Tier-0 facilities, which will allow the prompt reconstruction of this stream with an admissible latency of 8 hours, producing calibration constants of sufficient quality to permit a first-pass processing. An independent calibration stream is developed and tested, which selects tracks at the level-2 trigger (LVL2) after the reconstruction. The stream is composed of raw data, in byte-stream format, and contains only information from the relevant parts of the detector, in particular the hit information of the selected tracks. This leads to significantly improved bandwidth usage and storage capability. The stream will be used to derive and update the calibration and alignment constants, if necessary, every 24 hours. Processing is done using specialized algorithms running in the Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. The work addresses in particular the alignment requirements, the needs for track and hit selection, and timing and bandwidth issues.

  13. Streams and their future inhabitants

    DEFF Research Database (Denmark)

    Sand-Jensen, K.; Friberg, Nikolai

    2006-01-01

    In this final chapter we look ahead and address four questions: How do we improve stream management? What are the likely developments in the biological quality of streams? In which areas is knowledge on stream ecology insufficient? What can streams offer children of today and adults of tomorrow?...

  14. The Magellanic Stream and debris clouds

    Energy Technology Data Exchange (ETDEWEB)

    For, B.-Q.; Staveley-Smith, L. [International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 (Australia); Matthews, D. [Centre for Materials and Surface Science, La Trobe University, Melbourne, VIC 3086 (Australia); McClure-Griffiths, N. M., E-mail: biqing.for@icrar.org [CSIRO Astronomy and Space Science, Epping, NSW 1710 (Australia)

    2014-09-01

    We present a study of the discrete clouds and filaments in the Magellanic Stream using a new high-resolution survey of neutral hydrogen (H I) conducted with the H75 array of the Australia Telescope Compact Array, complemented by single-dish data from the Parkes Galactic All-Sky Survey. From the individual and combined data sets, we have compiled a catalog of 251 clouds and listed their basic parameters, including a morphological description useful for identifying cloud interactions. We find an unexpectedly large number of head-tail clouds in the region. The implication for the formation mechanism and evolution is discussed. The filaments appear to originate entirely from the Small Magellanic Cloud and extend into the northern end of the Magellanic Bridge.

  15. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays. It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees of non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses that GPUs rely on, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  16. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predicting the detection performance of a real system.

  17. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
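
    As a sketch of the 3D-to-1D transform described here, the following Python/NumPy fragment computes a frame-difference curve and thresholds it for cuts; it operates on raw luminance frames rather than the paper's MPEG macroblock features, an assumption made only to keep the example self-contained.

        import numpy as np

        def frame_difference_curve(frames):
            # Collapse a video of shape (T, H, W) into a 1-D curve: the mean
            # absolute luminance change between consecutive frames.
            frames = np.asarray(frames, dtype=float)
            return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

        def detect_scene_changes(curve, k=3.0):
            # Flag frames whose difference exceeds mean + k * std; returns
            # the indices of the first frames of new shots.
            threshold = curve.mean() + k * curve.std()
            return np.flatnonzero(curve > threshold) + 1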

  18. Fast Computation Methods Research for Two-Dimensional MUSIC Spectrum Based on Circular Arrays

    Institute of Scientific and Technical Information of China (English)

    杜政东; 魏平; 赵菲; 尹文禄

    2015-01-01

    For the fast computation of the MUSIC spectrum in two-dimensional direction-of-arrival estimation, five algorithms are studied: the MUSIC algorithm based on transforming a uniform circular array into a virtual linear array (UCA-ULA-MUSIC), the manifold-separation MUSIC algorithm (MS-MUSIC), the Fourier-domain line-search MUSIC algorithm (FD-Line-Search-MUSIC), the FFT-based MUSIC algorithm for 2n-element uniform circular arrays (2n-UCA-FFT-MUSIC), and the FFT-based MUSIC algorithm for arbitrary circular arrays (ACA-FFT-MUSIC). The implementation steps of each fast method for computing the two-dimensional MUSIC spectrum are summarized. On this basis, expressions for the computational complexity of each algorithm are given and, with the conventional method as a reference, the complexity ratios of the fast algorithms relative to the conventional method are compared. For different circular array geometries, the direction-finding performance of the applicable fast algorithms is also compared by simulation. The analysis and comparison show that the MS-MUSIC and ACA-FFT-MUSIC algorithms have the highest engineering value; using them individually, or jointly across frequency bands as the situation requires, allows a direction-finding system to balance direction-finding performance against computational efficiency.
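
    For reference, the conventional grid-search baseline that these fast algorithms accelerate can be sketched in a few lines of Python/NumPy; the array geometry and grid densities below are illustrative assumptions, not those of the paper.

        import numpy as np

        def uca_steering(az, el, n_sensors, radius, wavelength):
            # Steering vector of a uniform circular array for a plane wave
            # arriving from azimuth `az` and elevation `el` (radians).
            phi = 2 * np.pi * np.arange(n_sensors) / n_sensors
            k = 2 * np.pi / wavelength
            return np.exp(1j * k * radius * np.cos(el) * np.cos(az - phi))

        def music_spectrum_2d(R, n_src, az_grid, el_grid, n_sensors, radius, wavelength):
            # Conventional 2-D MUSIC: project every candidate steering vector
            # onto the noise subspace of the sample covariance matrix R.
            _, v = np.linalg.eigh(R)            # eigenvalues in ascending order
            En = v[:, : n_sensors - n_src]      # noise-subspace eigenvectors
            P = np.empty((el_grid.size, az_grid.size))
            for i, el in enumerate(el_grid):
                for j, az in enumerate(az_grid):
                    a = uca_steering(az, el, n_sensors, radius, wavelength)
                    P[i, j] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return P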

  19. StreamQRE: Modular Specification and Efficient Evaluation of Quantitative Queries over Streaming Data.

    Science.gov (United States)

    Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev

    2017-06-01

    Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.
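
    The StreamQRE combinators themselves are beyond a short example, but the kind of constant-memory, per-item O(1) evaluation the compiler targets can be sketched in Python; the key-partitioned running mean below is an illustrative stand-in, not the StreamQRE API.

        from collections import defaultdict

        class RunningMean:
            # Incremental mean: O(1) time per item, O(1) memory overall.
            def __init__(self):
                self.n, self.total = 0, 0.0
            def update(self, x):
                self.n += 1
                self.total += x
            def value(self):
                return self.total / self.n if self.n else float("nan")

        per_key = defaultdict(RunningMean)   # relational key-based partitioning
        for key, value in [("patientA", 2.0), ("patientB", 5.0), ("patientA", 4.0)]:
            per_key[key].update(value)
        print(per_key["patientA"].value())   # 3.0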

  20. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (STREAM), and the new framework, OpenCL, that tries to unify the GPGPU computing models.

  1. The Integration of Environmental Constraints into Tidal Array Optimisation

    Science.gov (United States)

    du Feu, Roan; de Trafford, Sebastian; Culley, Dave; Hill, Jon; Funke, Simon W.; Kramer, Stephan C.; Piggott, Matthew D.

    2015-04-01

    It has been estimated by The Carbon Trust that the marine renewable energy sector, in which tidal stream turbines are projected to play a large part, could produce 20% of the UK's present electricity requirements. This has led to the important question of how this technology can be deployed in an economically and environmentally sound manner. Work is currently under way to understand how the tidal turbines that constitute an array can be arranged to maximise the total power generated by that array. The work presented here continues this through the inclusion of environmental constraints. The benefits of the renewable energy sector to our environment at large are not in question. However, the question remains as to the effects this burgeoning sector will have on local environments, and how to mitigate these effects if they are detrimental. For example, the presence of tidal arrays can, through altering current velocity, drastically change the sediment transport into and out of an area along with re-suspending existing sediment. This can have the effects of scouring or submerging habitat, mobilising contaminants within the existing sediment, reducing food supply and altering the turbidity of the water, all of which greatly impact upon any fauna in the affected region. This work pays particular attention to the destruction of habitat of benthic fauna, as this is quantifiable as a direct result of change in the current speed, a primary factor in determining sediment accumulation on the sea floor. OpenTidalFarm is an open source tool that maximises the power generated by an array through repositioning the turbines within it. It currently uses a 2D shallow water model with turbines represented as bump functions of increased friction. The functional of interest, the power extracted by the array, is evaluated from the flow field, which is calculated at each iteration using a finite element method. A gradient-based local optimisation is then used through solving the

  2. LHCb : The LHCb Turbo stream

    CERN Multimedia

    Puig Navarro, Albert

    2015-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the "turbo stream" the trigger will write out a compact summary of "physics" objects containing all information necessary for analyses, and this will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during...

  3. Fiber Laser Array

    National Research Council Canada - National Science Library

    Simpson, Thomas

    2002-01-01

    ...., field-dependent, loss within the coupled laser array. During this program, Jaycor focused on the construction and use of an experimental apparatus that can be used to investigate the coherent combination of an array of fiber lasers...

  4. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    Science.gov (United States)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  5. Streaming from the Equator of a Drop in an External Electric Field.

    Science.gov (United States)

    Brosseau, Quentin; Vlahovska, Petia M

    2017-07-21

    Tip streaming generates micron- and submicron-sized droplets when a thin thread pulled from the pointy end of a drop disintegrates. Here, we report streaming from the equator of a drop placed in a uniform electric field. The instability generates concentric fluid rings encircling the drop, which break up to form an array of microdroplets in the equatorial plane. We show that the streaming results from an interfacial instability at the stagnation line of the electrohydrodynamic flow, which creates a sharp edge. The flow draws from the equator a thin sheet which destabilizes and sheds fluid cylinders. This streaming phenomenon provides a new route for generating monodisperse microemulsions.

  6. Alignment data streams for the ATLAS inner detector

    CERN Document Server

    Pinto, B; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; García, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: a physics stream, an express stream, a calibration stream, and a diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment cons...

  7. The Rabbit Stream Cipher

    DEFF Research Database (Denmark)

    Boesgaard, Martin; Vesterager, Mette; Zenner, Erik

    2008-01-01

    The stream cipher Rabbit was first presented at FSE 2003, and no attacks against it have been published until now. With a measured encryption/decryption speed of 3.7 clock cycles per byte on a Pentium III processor, Rabbit also provides very high performance. This paper gives a concise description of the Rabbit design and some of the cryptanalytic results available.

  8. Music Streaming in Denmark

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Rex

    This report analyses how a ’per user’ settlement model differs from the ‘pro rata’ model currently used. The analysis is based on data for all streams by WiMP users in Denmark during August 2013. The analysis has been conducted in collaboration with Christian Schlelein from Koda on the basis of d...

  9. Academic streaming in Europe

    DEFF Research Database (Denmark)

    Falaschi, Alessandro; Mønster, Dan; Doležal, Ivan

    2004-01-01

    The TF-NETCAST task force was active from March 2003 to March 2004, and during this time the members worked on various aspects of streaming media related to the ultimate goal of setting up common services and infrastructures to enable netcasting of high quality content to the academic community...

  10. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    Science.gov (United States)

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs), promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundancy array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
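
    The DOF bookkeeping behind such designs is easy to reproduce; below is a small Python/NumPy sketch of the difference co-array, using illustrative 4-sensor geometries rather than the paper's cascade array.

        import numpy as np

        def difference_coarray(positions):
            # All pairwise differences of sensor positions; the number of
            # distinct lags bounds the DOFs a co-array estimator can exploit.
            p = np.asarray(positions)
            return np.unique((p[:, None] - p[None, :]).ravel())

        # A 4-sensor minimum redundancy array vs. a 4-sensor ULA:
        print(difference_coarray([0, 1, 4, 6]).size)  # 13 distinct lags (-6..6)
        print(difference_coarray([0, 1, 2, 3]).size)  # 7 distinct lags (-3..3)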

  11. Multiwall carbon nanotube microcavity arrays

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Rajib; Butt, Haider, E-mail: h.butt@bham.ac.uk [Nanotechnology Laboratory, School of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT (United Kingdom); Rifat, Ahmmed A. [Integrated Lightwave Research Group, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603 (Malaysia); Yetisen, Ali K.; Yun, Seok Hyun [Harvard Medical School and Wellman Center for Photomedicine, Massachusetts General Hospital, 65 Landsdowne Street, Cambridge, Massachusetts 02139 (United States); Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Dai, Qing [National Center for Nanoscience and Technology, Beijing 100190 (China)

    2016-03-21

    Periodic, highly dense multi-wall carbon nanotube (MWCNT) arrays can act as photonic materials exhibiting band gaps in the visible regime and beyond the terahertz range. MWCNT arrays in a square arrangement with nanoscale lattice constants can be configured as a microcavity with predictable resonance frequencies. Here, computational analyses of compact square microcavities (≈0.8 × 0.8 μm^2) in MWCNT arrays were demonstrated to obtain enhanced quality factors (≈170–180) and narrow-band resonance peaks. Cavity resonances were rationally designed and optimized (nanotube geometry and cavity size) with the finite element method. Series (1 × 2 and 1 × 3) and parallel (2 × 1 and 3 × 1) combinations of microcavities were modeled and resonance modes were analyzed. Higher-order MWCNT microcavities showed enhanced resonance modes, which were red-shifted with increasing Q-factors. Parallel microcavity geometries were also optimized to obtain narrow-band tunable filtering in low-loss communication windows (810, 1336, and 1558 nm). Compact series and parallel MWCNT microcavity arrays may have applications in optical filters and miniaturized optical communication devices.

  12. Radiation streaming in power reactors. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lahti, G.P.; Lee, R.R.; Courtney, J.C. (eds.)

    1979-02-01

    Separate abstracts are included for each of the 14 papers given at a special session on Radiation Streaming in Power Reactors held on November 15 at the American Nuclear Society 1978 Winter Meeting in Washington, D.C. The papers describe the methods of calculation, the engineering of shields, and the measurement of radiation environments within the containments of light water power reactors. Comparisons of measured and calculated data are used to determine the accuracy of computer predictions of the radiation environment. Specific computational and measurement techniques are described and evaluated. Emphasis is on radiation streaming in the annular region between the reactor vessel and the primary shield and its resultant environment within the primary containment.

  13. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  14. A Lightweight Protocol for Secure Video Streaming.

    Science.gov (United States)

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point-to-point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
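
    A minimal Python sketch of the core mechanism (HMAC-authenticated datagrams with a sequence number) follows; the packet layout and key handling are assumptions made for illustration and do not reproduce the paper's exact protocol.

        import hashlib, hmac, os, struct

        KEY = os.urandom(32)  # pre-shared key; key distribution is out of scope

        def make_packet(seq, payload, key=KEY):
            # Frame: 4-byte sequence number || payload || 32-byte HMAC-SHA256 tag.
            header = struct.pack("!I", seq)
            tag = hmac.new(key, header + payload, hashlib.sha256).digest()
            return header + payload + tag

        def verify_packet(packet, key=KEY, tag_len=32):
            body, tag = packet[:-tag_len], packet[-tag_len:]
            expected = hmac.new(key, body, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, tag):
                raise ValueError("authentication failed")
            (seq,) = struct.unpack("!I", body[:4])
            return seq, body[4:]

        pkt = make_packet(7, b"video chunk")
        assert verify_packet(pkt) == (7, b"video chunk")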

  15. realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array

    Science.gov (United States)

    Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.

    2018-05-01

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.

  16. Patch holography using a double layer microphone array

    DEFF Research Database (Denmark)

    Gomes, Jesper Skovhus

    a closed local element mesh that surrounds the microphone array, and with a part of the mesh coinciding with a patch, the entire source is not needed in the model. Since the array has two layers, sources/reflections behind the array are also allowed. The Equivalent Source Method (ESM) is another technique in which the sound field is represented by a set of monopoles placed inside the source. In this paper these monopoles are distributed so that they surround the array, and the reconstruction is compared with the IBEM-based approach. The comparisons are based on computer simulations with a planar double layer array and sources with different shapes.
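
    As a sketch of the ESM step described here, the following Python/NumPy fragment fits monopole amplitudes to measured pressures via free-field Green's functions; the geometry and the absence of regularization are simplifying assumptions, not the paper's configuration.

        import numpy as np

        def greens_matrix(field_pts, src_pts, k):
            # Free-field Green's function between each source and field point;
            # field_pts: (M, 3) mic positions, src_pts: (N, 3) monopole positions.
            d = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
            return np.exp(-1j * k * d) / (4 * np.pi * d)

        def esm_fit(p_measured, mic_pos, monopole_pos, k):
            # Least-squares monopole amplitudes reproducing the measured
            # pressures; the same monopoles then predict the field anywhere,
            # e.g. on the patch to be reconstructed.
            A = greens_matrix(mic_pos, monopole_pos, k)
            q, *_ = np.linalg.lstsq(A, p_measured, rcond=None)
            return q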

  17. Effect of wire shape on wire array discharge

    Energy Technology Data Exchange (ETDEWEB)

    Shimomura, N.; Tanaka, Y.; Yushita, Y.; Nagata, M. [University of Tokushima, Department of Electrical and Electronic Engineering, Tokushima (Japan); Teramoto, Y.; Katsuki, S.; Akiyama, H. [Kumamoto University, Department of Electrical and Computer Engineering, Kumamoto (Japan)

    2001-09-01

    Although considerable investigations of z-pinches aimed at achieving nuclear fusion have been reported, little attention has been given to how a wire array consisting of many parallel wires explodes. Instability existing in the wire array discharge has been demonstrated. In this paper, the effect of wire shape on the unstable behavior of the wire array discharge is examined by numerical analysis. Claws formed on the wires during installation may influence the uniformity of the current distribution across the array. The effect of production errors in wire diameter is computed by the Monte Carlo method. (author)

  18. Effect of wire shape on wire array discharge

    International Nuclear Information System (INIS)

    Shimomura, N.; Tanaka, Y.; Yushita, Y.; Nagata, M.; Teramoto, Y.; Katsuki, S.; Akiyama, H.

    2001-01-01

    Although considerable investigations of z-pinches aimed at achieving nuclear fusion have been reported, little attention has been given to how a wire array consisting of many parallel wires explodes. Instability existing in the wire array discharge has been demonstrated. In this paper, the effect of wire shape on the unstable behavior of the wire array discharge is examined by numerical analysis. Claws formed on the wires during installation may influence the uniformity of the current distribution across the array. The effect of production errors in wire diameter is computed by the Monte Carlo method. (author)

  19. An efficient method for evaluating RRAM crossbar array performance

    Science.gov (United States)

    Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping

    2016-06-01

    An efficient method is proposed in this paper to mitigate the computational burden of resistive random access memory (RRAM) array simulation. In the worst-case scenario, the cost of simulating a 4 Mb RRAM array with line resistance is greatly reduced using this method. For 1S1R-RRAM array structures, static and statistical parameters in both reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even when the simulated RRAM array size is reduced by a factor of one thousand, which indicates significant improvements in both computational efficiency and memory requirements.

  20. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
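
    A one-line illustration of the kind of systematic error described here: a plain arithmetic mean of an angular element wraps incorrectly, whereas averaging unit vectors does not. The circular mean below only illustrates the pitfall; the authors' algorithm averages a different, carefully chosen set of variables.

        import numpy as np

        def circular_mean_deg(angles_deg):
            # Average angles via unit vectors to avoid wrap-around bias.
            a = np.radians(angles_deg)
            return np.degrees(np.arctan2(np.sin(a).mean(), np.cos(a).mean())) % 360

        nodes = [359.0, 1.0]             # e.g., longitudes of ascending node
        print(np.mean(nodes))            # 180.0 -- physically wrong
        print(circular_mean_deg(nodes))  # 0.0   -- correct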

  1. Streaming gravity mode instability

    International Nuclear Information System (INIS)

    Wang Shui.

    1989-05-01

    In this paper, we study the stability of a current sheet with a sheared flow in a gravitational field which is perpendicular to the magnetic field and plasma flow. This mixing mode, caused by the combined roles of the sheared flow and gravity, is named the streaming gravity mode instability. The conditions for this mode's instability are discussed for an ideal four-layer model in the incompressible limit. (author). 5 refs

  2. Autonomous Byte Stream Randomizer

    Science.gov (United States)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in the data, using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, possessing the ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
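
    The core mechanism, a seeded, invertible Fisher-Yates shuffle over the byte stream, can be sketched compactly. Python's random.Random stands in for the cryptographically secure generator that a real deployment would derive from the secure random seed, so this shows the data flow only, not a secure implementation.

```python
import random

def randomize(data: bytes, seed: int) -> bytes:
    """In-place Fisher-Yates shuffle of a byte stream, driven by a seeded PRNG."""
    buf = bytearray(data)
    rng = random.Random(seed)          # stand-in only; not a CSPRNG
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randrange(i + 1)       # unbiased index in [0, i]
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def derandomize(data: bytes, seed: int) -> bytes:
    """Invert the shuffle by replaying the same swap sequence in reverse."""
    buf = bytearray(data)
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(buf) - 1, 0, -1)]
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

msg = b"net-centric byte stream"
assert derandomize(randomize(msg, seed=42), seed=42) == msg
```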

  3. Prototype of a production system for Cherenkov Telescope Array with DIRAC

    CERN Document Server

    Arrabito, L; Haupt, A; Graciani Diaz, R; Stagni, F; Tsaregorodtsev, A

    2015-01-01

    The Cherenkov Telescope Array (CTA) — an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale — is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of about 10 GB/s for about 1000 hours of observation per year, thus producing several PB/year, is expected. A large amount of CPU time is required for data processing as well as for the massive Monte Carlo simulations needed for detector calibration purposes. The current CTA computing model is based on a distributed infrastructure for the archive and the off-line data processing. In order to manage the off-line data processing in a distributed environment, CTA has evaluated the DIRAC (Distributed Infrastructure with Remote Agent Control) system, which is a general framework for the management of tasks over distributed heterogeneous computing environments. In particular, a production sy...

  4. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinver...

  5. Stream processing health card application.

    Science.gov (United States)

    Polat, Seda; Gündem, Taflan Imre

    2012-10-01

    In this paper, we propose a data stream management system embedded to a smart card for handling and storing user specific summaries of streaming data coming from medical sensor measurements and/or other medical measurements. The data stream management system that we propose for a health card can handle the stream data rates of commonly known medical devices and sensors. It incorporates a type of context awareness feature that acts according to user specific information. The proposed system is cheap and provides security for private data by enhancing the capabilities of smart health cards. The stream data management system is tested on a real smart card using both synthetic and real data.

  6. Carbon nanotube nanoelectrode arrays

    Science.gov (United States)

    Ren, Zhifeng; Lin, Yuehe; Yantasee, Wassana; Liu, Guodong; Lu, Fang; Tu, Yi

    2008-11-18

    The present invention relates to microelectrode arrays (MEAs), and more particularly to carbon nanotube nanoelectrode arrays (CNT-NEAs) for chemical and biological sensing, and methods of use. A nanoelectrode array includes a carbon nanotube material comprising an array of substantially linear carbon nanotubes each having a proximal end and a distal end, the proximal ends of the carbon nanotubes attached to a catalyst substrate material so as to form the array with a pre-determined site density, wherein the carbon nanotubes are aligned with respect to one another within the array; an electrically insulating layer on the surface of the carbon nanotube material, whereby the distal ends of the carbon nanotubes extend beyond the electrically insulating layer; a second adhesive electrically insulating layer on the surface of the electrically insulating layer, whereby the distal ends of the carbon nanotubes extend beyond the second adhesive electrically insulating layer; and a metal wire attached to the catalyst substrate material.

  7. Josephson junction arrays

    International Nuclear Information System (INIS)

    Bindslev Hansen, J.; Lindelof, P.E.

    1985-01-01

    In this review we intend to cover recent work involving arrays of Josephson junctions. The work on such arrays falls naturally into three main areas of interest: 1. Technical applications of Josephson junction arrays for high-frequency devices. 2. Experimental studies of 2-D model systems (Kosterlitz-Thouless phase transition, commensurate-incommensurate transition in frustrated (flux) lattices). 3. Investigations of phenomena associated with non-equilibrium superconductivity in and around Josephson junctions (with high current density). (orig./BUD)

  8. Phased-array radars

    Science.gov (United States)

    Brookner, E.

    1985-02-01

    The operating principles, technology, and applications of phased-array radars are reviewed and illustrated with diagrams and photographs. Consideration is given to the antenna elements, circuitry for time delays, phase shifters, pulse coding and compression, and hybrid radars combining phased arrays with lenses to alter the beam characteristics. The capabilities and typical hardware of phased arrays are shown using the US military systems COBRA DANE and PAVE PAWS as examples.

  9. Storage array reflection considerations

    International Nuclear Information System (INIS)

    Haire, M.J.; Jordan, W.C.; Taylor, R.G.

    1997-01-01

    The assumptions used for reflection conditions of single containers are fairly well established and consistently applied throughout the industry in nuclear criticality safety evaluations. Containers are usually considered to be either fully water reflected (i.e., surrounded by 6 to 12 in. of water) for safety calculations or reflected by 1 in. of water for nominal (structural material and air) conditions. Tables and figures are usually available for performing comparative evaluations of containers under various loading conditions. Reflection considerations used for evaluating the safety of storage arrays of fissile material are not as well established. When evaluating arrays, it has become more common for analysts to use calculations to demonstrate the safety of the array configuration. In performing these calculations, the analyst has considerable freedom concerning the assumptions made for modeling the reflection of the array. Considerations are given for the physical layout of the array with little or no discussion (or demonstration) of what conditions are bounded by the assumed reflection conditions. For example, an array may be generically evaluated by placing it in a corner of a room in which the opposing walls are far away. Typically, it is believed that complete flooding of the room is incredible, so the array is evaluated for various levels of water mist interspersed among array containers. This paper discusses some assumptions that are made regarding storage array reflection

  10. The EUROBALL array

    International Nuclear Information System (INIS)

    Rossi Alvarez, C.

    1998-01-01

    The capabilities of the multidetector array EUROBALL are described, with emphasis on the history and formal organization of the related European collaboration. The detector layout is presented together with the electronics and data acquisition capabilities. The status of the instrument, its performance, and the main features of some recently developed ancillary detectors are also described. The EUROBALL array has been operational at the Legnaro National Laboratory (Italy) since April 1997 and is expected to run until November 1998. The array represents a significant improvement in detector efficiency and sensitivity with respect to the previous generation of multidetector arrays

  11. Rectenna array measurement results

    Science.gov (United States)

    Dickinson, R. M.

    1980-01-01

    The measured performance characteristics of a rectenna array are reviewed and compared to the performance of a single element. It is shown that the performance of the collection of elements may be extrapolated from that of an individual element. Techniques for current and voltage combining were demonstrated. The array performance as a function of various operating parameters is characterized, and techniques for overvoltage protection and automatic fault clearing in the array are demonstrated. A method for detecting failed elements also exists. Instrumentation for deriving performance effectiveness is described. Measured harmonic radiation patterns and fundamental-frequency scattered patterns for a low-level illumination rectenna array are presented.

  12. Arrayed waveguide Sagnac interferometer.

    Science.gov (United States)

    Capmany, José; Muñoz, Pascual; Sales, Salvador; Pastor, Daniel; Ortega, Beatriz; Martinez, Alfonso

    2003-02-01

    We present a novel device, an arrayed waveguide Sagnac interferometer, that combines the flexibility of arrayed waveguides and the wide application range of fiber or integrated optics Sagnac loops. We form the device by closing an array of wavelength-selective light paths provided by two arrayed waveguides with a single 2 x 2 coupler in a Sagnac configuration. The equations that describe the device's operation in general conditions are derived. A preliminary experimental demonstration is provided of a fiber prototype in passive operation that shows good agreement with the expected theoretical performance. Potential applications of the device in nonlinear operation are outlined and discussed.

  13. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
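
    A minimal sketch of the dimensionality signal itself is given below: when all channels record the same coherent wavefront plus noise, one principal component carries most of the energy, while a malfunctioning channel contributes an independent component and raises the effective dimension. The window shape, noise levels, and 95% energy threshold are illustrative assumptions rather than the authors' detector settings.

```python
import numpy as np

def effective_dimension(window: np.ndarray, energy: float = 0.95) -> int:
    """Principal components needed to capture `energy` of the variance
    in a (channels x samples) window of array data."""
    x = window - window.mean(axis=1, keepdims=True)
    s = np.linalg.svd(x, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, energy) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
wavefront = np.sin(2 * np.pi * 5 * t)

# Healthy array: 12 channels see one coherent wavefront plus sensor noise.
healthy = np.vstack([wavefront + 0.1 * rng.normal(size=t.size) for _ in range(12)])
# Malfunction: channel 7 records only its own (independent) noise.
broken = healthy.copy()
broken[7] = 3.0 * rng.normal(size=t.size)

print(effective_dimension(healthy), effective_dimension(broken))  # e.g. 1 2
```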

  14. X-ray source array

    International Nuclear Information System (INIS)

    Cooperstein, G.; Lanza, R.C.; Sohval, A.R.

    1983-01-01

    A circular array of cold cathode diode X-ray sources, for radiation imaging applications, such as computed tomography includes electrically conductive cathode plates each of which cooperates with at least two anodes to form at least two diode sources. In one arrangement, two annular cathodes are separated by radially extending, rod-like anodes. Field enhancement blades may be provided on the cathodes. In an alternative arrangement, the cathode plates extend radially and each pair is separated by an anode plate also extending radially. (author)

  15. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  16. High performance multiple stream data transfer

    International Nuclear Information System (INIS)

    Rademakers, F.; Saiz, P.

    2001-01-01

    The ALICE detector at the LHC (CERN) will record raw data at a rate of 1.2 gigabytes per second. Analysing all this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres to use their computing infrastructure. The remote centres will reconstruct and analyse the events, and make the results available. High-rate data transfer between computing centres (Tiers) will therefore become of paramount importance. The authors present several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an ftp method that allows several streams to be connected at the same time. Thanks to these multiple streams, it is possible to increase the rate at which the data is transferred. While several 'multiple stream ftp solutions' already exist, the authors' method is based on a parallel socket implementation which allows not only files but also objects (or any large message) to be sent in parallel. A prototype able to manage different transfers will be presented. This is the first step of a system, to be implemented, that will be able to take care of the connections with the remote centres to exchange data and monitor the status of the transfer
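
    The multiple-stream idea itself is compact: slice the payload, push each slice over its own TCP connection, and reassemble by index on the receiving side. The loopback sketch below illustrates only that pattern; it is not the authors' parallel-socket implementation, and the port, slice count, and framing are assumptions.

```python
import socket
import threading
import time

def recv_exact(conn: socket.socket, k: int) -> bytes:
    """Read exactly k bytes from a connection."""
    buf = bytearray()
    while len(buf) < k:
        part = conn.recv(k - len(buf))
        if not part:
            raise ConnectionError("stream closed early")
        buf += part
    return bytes(buf)

def send_parallel(data: bytes, host: str, port: int, n: int = 4) -> None:
    """Split data into n slices, each pushed over its own TCP connection and
    framed with (slice index, length) so the receiver can reassemble."""
    size = -(-len(data) // n)  # ceiling division
    def worker(i: int) -> None:
        chunk = data[i * size:(i + 1) * size]
        with socket.create_connection((host, port)) as s:
            s.sendall(i.to_bytes(4, "big") + len(chunk).to_bytes(8, "big") + chunk)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def recv_parallel(port: int, n: int = 4) -> bytes:
    """Accept n streams and reassemble the slices in index order."""
    parts = [b""] * n
    with socket.create_server(("127.0.0.1", port)) as srv:
        for _ in range(n):
            conn, _ = srv.accept()
            with conn:
                i = int.from_bytes(recv_exact(conn, 4), "big")
                length = int.from_bytes(recv_exact(conn, 8), "big")
                parts[i] = recv_exact(conn, length)
    return b"".join(parts)

# Loopback demo: receiver in a thread, sender in the main thread.
payload = b"raw event data " * 100_000
result = {}
rx = threading.Thread(target=lambda: result.update(data=recv_parallel(9912)))
rx.start()
time.sleep(0.2)  # crude startup wait so the server is listening
send_parallel(payload, "127.0.0.1", 9912)
rx.join()
assert result["data"] == payload
```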

  17. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    Inhomogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development work on experimental as well as computational methods for quantifying this streaming radiation is still continuing. The Monte Carlo based radiation transport code MCNP is the usual tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated will be analysed by simulating the experimental set-up with the MCNP code, and optimized input parameters for the code will be formulated for finding solutions to similar radiation streaming problems. Comparison of experimental data obtained from radiation streaming experiments through ducts will give a set of thumb rules and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study throws light on the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made; studies on spectral comparison of the streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, the results of this kind of radiation streaming simulation and experiment will be very useful in shield structure optimization without compromising radiation safety.

  18. Nitrogen saturation in stream ecosystems.

    Science.gov (United States)

    Earl, Stevan R; Valett, H Maurice; Webster, Jackson R

    2006-12-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient of background N concentration. Uptake increased in four of six streams as NO3-N was incrementally elevated, indicating that these streams were not saturated. Uptake generally corresponded to Michaelis-Menten kinetics but deviated from the model in two streams where some other growth-critical factor may have been limiting. Proximity to saturation was correlated to background N concentration but was better predicted by the ratio of dissolved inorganic N (DIN) to soluble reactive phosphorus (SRP), suggesting phosphorus limitation in several high-N streams. Uptake velocity, a reflection of uptake efficiency, declined nonlinearly with increasing N amendment in all streams. At the same time, uptake velocity was highest in the low-N streams. Our conceptual model of N transport, uptake, and uptake efficiency suggests that, while streams may be active sites of N uptake on the landscape, N saturation contributes to nonlinear changes in stream N dynamics that correspond to decreased uptake efficiency.
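
    In the standard Michaelis-Menten form underlying this coupling, areal uptake U saturates with concentration C while uptake velocity v_f, the measure of uptake efficiency, declines monotonically as C rises, matching the nonlinear decline reported for all streams. Symbols follow the usual convention (U_max: maximum uptake; K_s: half-saturation constant); the nutrient-spiraling coupling itself is not reproduced here:

```latex
U(C) = \frac{U_{\max}\, C}{K_s + C},
\qquad
v_f(C) = \frac{U(C)}{C} = \frac{U_{\max}}{K_s + C}
```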

  19. Many-body simulations using an array processor

    International Nuclear Information System (INIS)

    Rapaport, D.C.

    1985-01-01

    Simulations of microscopic models of water and polypeptides using molecular dynamics and Monte Carlo techniques have been carried out with the aid of an FPS array processor. The computational techniques are discussed, with emphasis on the development and optimization of the software to take account of the special features of the processor. The computing requirements of these simulations exceed what could be reasonably carried out on a normal 'scientific' computer. While the FPS processor is highly suited to the kinds of models described, several other computationally intensive problems in statistical mechanics are outlined for which alternative processor architectures are more appropriate

  20. Oscillating acoustic streaming jet

    International Nuclear Information System (INIS)

    Moudjed, Brahim; Botton, Valery; Henry, Daniel; Millet, Severine; Ben Hadid, Hamda; Garandet, Jean-Paul

    2014-01-01

    The present paper provides the first experimental investigation of an oscillating acoustic streaming jet. The observations are performed in the far field of a 2 MHz circular plane ultrasound transducer introduced in a rectangular cavity filled with water. Measurements are made by Particle Image Velocimetry (PIV) in horizontal and vertical planes near the end of the cavity. Oscillations of the jet appear in this zone, for a sufficiently high Reynolds number, as an intermittent phenomenon on an otherwise straight jet fluctuating in intensity. The observed perturbation pattern is similar to that of former theoretical studies. This intermittently oscillatory behavior is the first step to the transition to turbulence. (authors)

  1. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays Trees Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks.Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  2. Improved SNR of phased-array PERES coils via simulation study

    International Nuclear Information System (INIS)

    Rodríguez, Alfredo O; Medina, Lucía

    2005-01-01

    A computational comparison of signal-to-noise ratio (SNR) was performed between a conventional phased array of two circular-shaped coils and a petal resonator surface array. The quasi-static model and phased-array optimum SNR were combined to derive an SNR formula for each array. Analysis of mutual inductance between coil petals was carried out to compute the optimal coil separation and optimum number of petal coils. Mutual interaction between coil arrays was not included in the model because this does not drastically affect coil performance. Phased arrays of PERES coils show a 114% improvement in SNR over that of the simplest circular configuration. (note)
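
    For context, under the quasi-static model used here, with element noise taken as uncorrelated (consistent with the abstract's neglect of mutual interaction between the arrays), the phased-array optimum-SNR combination takes the standard root-sum-square form over the N elements; this is the quantity being compared between the petal and circular configurations:

```latex
\mathrm{SNR}_{\mathrm{opt}} = \sqrt{\sum_{i=1}^{N} \mathrm{SNR}_i^{\,2}}
```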

  3. Triggering the GRANDE array

    International Nuclear Information System (INIS)

    Wilson, C.L.; Bratton, C.B.; Gurr, J.; Kropp, W.; Nelson, M.; Sobel, H.; Svoboda, R.; Yodh, G.; Burnett, T.; Chaloupka, V.; Wilkes, R.J.; Cherry, M.; Ellison, S.B.; Guzik, T.G.; Wefel, J.; Gaidos, J.; Loeffler, F.; Sembroski, G.; Goodman, J.; Haines, T.J.; Kielczewska, D.; Lane, C.; Steinberg, R.; Lieber, M.; Nagle, D.; Potter, M.; Tripp, R.

    1990-01-01

    A brief description of the Gamma Ray And Neutrino Detector Experiment (GRANDE) is presented. The detector elements and electronics are described. The trigger logic for the array is then examined. The triggers for the Gamma Ray and the Neutrino portions of the array are treated separately. (orig.)

  4. The metaphors we stream by: Making sense of music streaming

    OpenAIRE

    Hagen, Anja Nylund

    2016-01-01

    In Norway music-streaming services have become mainstream in everyday music listening. This paper examines how 12 heavy streaming users make sense of their experiences with Spotify and WiMP Music (now Tidal). The analysis relies on a mixed-method qualitative study, combining music-diary self-reports, online observation of streaming accounts, Facebook and last.fm scrobble-logs, and in-depth interviews. By drawing on existing metaphors of Internet experiences we demonstrate that music-streaming...

  5. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
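
    The abstract's matrix-multiplication example can be made concrete outside NESL. The sketch below contrasts materializing all n^3 scalar products at once, as full flattening implies, with streaming one rank-1 slice at a time, which caps temporary storage at n^2. It illustrates only the space argument, under assumed sizes, not NESL's or Accelerate's actual execution machinery.

```python
import numpy as np

n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Fully materialized (what naive flattening implies): every scalar product
# outer[i, j, k] = a[i, k] * b[k, j] exists at once -- n**3 temporary floats:
#   outer = a[:, None, :] * b.T[None, :, :]
#   c = outer.sum(axis=2)

# Streamed: one k-slice at a time; peak extra space is n**2, not n**3.
c = np.zeros((n, n))
for k in range(n):
    c += np.outer(a[:, k], b[k, :])      # rank-1 update

assert np.allclose(c, a @ b)
```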

  6. Tracking Gendered Streams

    Directory of Open Access Journals (Sweden)

    Maria Eriksson

    2017-10-01

    Full Text Available One of the most prominent features of digital music services is the provision of personalized music recommendations that come about through the profiling of users and audiences. Based on a range of "bot experiments," this article investigates if, and how, gendered patterns in music recommendations are provided by the streaming service Spotify. While our experiments did not give any strong indications that Spotify assigns different taste profiles to male and female users, the study showed that male artists were highly overrepresented in Spotify's music recommendations; an issue which we argue prompts users to cite hegemonic masculine norms within the music industries. Although the results should be approached as historically and contextually contingent, we argue that they point to how gender and gendered tastes may be constituted through the interplay between users and algorithmic knowledge-making processes, and how digital content delivery may maintain and challenge gender relations and gendered power differentials within the music industries. Seen through the lens of critical research on software, music and gender performativity, the experiments thus provide insights into how gender is shaped and attributed meaning as it materializes in contemporary music streams.

  7. Application of multiplicative array techniques for multibeam sounder systems

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    modification in terms of additional computation or hardware for improved array gain. The present work is devoted to the study of a better beamforming method, i.e., a multiplicative array technique with some modification proposed by Brown and Rowland...

  8. Data acquisition for experiments with multi-detector arrays

    Indian Academy of Sciences (India)

    Experiments with multi-detector arrays have special requirements and place higher demands on computer data acquisition systems. In this contribution we discuss data acquisition systems with special emphasis on multi-detector arrays and in particular we describe a new data acquisition system, AMPS which we have ...

  9. A stream cipher based on a spatiotemporal chaotic system

    International Nuclear Information System (INIS)

    Li Ping; Li Zhong; Halang, Wolfgang A.; Chen Guanrong

    2007-01-01

    A stream cipher based on a spatiotemporal chaotic system is proposed. A one-way coupled map lattice consisting of logistic maps serves as the spatiotemporal chaotic system. Multiple keystreams are generated from the coupled map lattice using simple algebraic computations, and are then used to encrypt plaintext via bitwise XOR. This makes the cipher rather simple and efficient. Numerical investigation shows that the cryptographic properties of the generated keystream are satisfactory. The cipher appears to offer higher security, higher efficiency and lower computational expense than a stream cipher based on a spatiotemporal chaotic system proposed recently.
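
    A toy version of this construction is sketched below: a one-way coupled lattice of logistic maps is iterated from a secret initial state, the lattice states are quantized to keystream bytes, and encryption is bitwise XOR. Lattice size, coupling strength, transient length, and quantization are illustrative choices rather than the paper's parameters, and no security is claimed for the sketch.

```python
import numpy as np

def keystream(key: np.ndarray, n_bytes: int, eps: float = 0.95) -> bytes:
    """Byte keystream from a one-way coupled logistic-map lattice:
    x_i(t+1) = (1 - eps) * f(x_i(t)) + eps * f(x_{i-1}(t)),  f(x) = 4x(1 - x).
    """
    f = lambda v: 4.0 * v * (1.0 - v)
    step = lambda v: (1.0 - eps) * f(v) + eps * f(np.roll(v, 1))
    x = key.copy()                      # lattice state in (0, 1): the secret key
    for _ in range(64):                 # discard a transient
        x = step(x)
    out = bytearray()
    while len(out) < n_bytes:
        x = step(x)
        out += (x * 255).astype(np.uint8).tobytes()
    return bytes(out[:n_bytes])

def crypt(data: bytes, key: np.ndarray) -> bytes:
    """XOR with the keystream; applying it twice restores the plaintext."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

key = np.array([0.141, 0.592, 0.653, 0.589, 0.793, 0.238, 0.462, 0.643])
ct = crypt(b"secret telemetry frame", key)
assert crypt(ct, key) == b"secret telemetry frame"
```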

  10. Morphology of a Wetland Stream

    Science.gov (United States)

    Jurmu; Andrle

    1997-11-01

    Little attention has been paid to wetland stream morphology in the geomorphological and environmental literature, and in the recently expanding wetland reconstruction field, stream design has been based primarily on stream morphologies typical of nonwetland alluvial environments. Field investigation of a wetland reach of Roaring Brook, Stafford, Connecticut, USA, revealed several significant differences between the morphology of this stream and the typical morphology of nonwetland alluvial streams. Six morphological features of the study reach were examined: bankfull flow, meanders, pools and riffles, thalweg location, straight reaches, and cross-sectional shape. It was found that bankfull flow definitions originating from streams in nonwetland environments did not apply. Unusual features observed in the wetland reach include tight bends and a large axial wavelength to width ratio. A lengthy straight reach exists that exceeds what is typically found in nonwetland alluvial streams. The lack of convex bank point bars in the bends, a greater channel width at riffle locations, an unusual thalweg location, and small form ratios (a deep and narrow channel) were also differences identified. Further study is needed on wetland streams of various regions to determine if differences in morphology between alluvial and wetland environments can be applied in order to improve future designs of wetland channels. KEY WORDS: Stream morphology; Wetland restoration; Wetland creation; Bankfull; Pools and riffles; Meanders; Thalweg

  11. Stream on the Sky: Outsourcing Access Control Enforcement for Stream Data to the Cloud

    OpenAIRE

    Dinh, Tien Tuan Anh; Datta, Anwitaman

    2012-01-01

    There is an increasing trend for businesses to migrate their systems towards the cloud. Security concerns that arise when outsourcing data and computation to the cloud include data confidentiality and privacy. Given that a tremendous amount of data is being generated everyday from plethora of devices equipped with sensing capabilities, we focus on the problem of access controls over live streams of data based on triggers or sliding windows, which is a distinct and more challenging problem tha...

  12. Flow Field and Acoustic Predictions for Three-Stream Jets

    Science.gov (United States)

    Simmons, Shaun Patrick; Henderson, Brenda S.; Khavaran, Abbas

    2014-01-01

    Computational fluid dynamics was used to analyze a three-stream nozzle parametric design space. The study varied bypass-to-core area ratio, tertiary-to-core area ratio and jet operating conditions. The flowfield solutions from the Reynolds-Averaged Navier-Stokes (RANS) code Overflow 2.2e were used to pre-screen experimental models for a future test in the Aero-Acoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center (GRC). Flowfield solutions were considered in conjunction with the jet-noise-prediction code JeNo to screen the design concepts. A two-stream versus three-stream computation based on equal mass flow rates showed a reduction in peak turbulent kinetic energy (TKE) for the three-stream jet relative to that for the two-stream jet which resulted in reduced acoustic emission. Additional three-stream solutions were analyzed for salient flowfield features expected to impact farfield noise. As tertiary power settings were increased there was a corresponding near nozzle increase in shear rate that resulted in an increase in high frequency noise and a reduction in peak TKE. As tertiary-to-core area ratio was increased the tertiary potential core elongated and the peak TKE was reduced. The most noticeable change occurred as secondary-to-core area ratio was increased thickening the secondary potential core, elongating the primary potential core and reducing peak TKE. As forward flight Mach number was increased the jet plume region decreased and reduced peak TKE.

  13. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations 1.2 Wavefield in open space 1.3 Wavefield in bounded space 1.4 Stochastic wavefield 1.5 Multipath propagation 1.6 Propagation through random medium 1.7 Exercises
    Chapter Two: Sensor Array Systems. 2.1 Uniform linear array (ULA) 2.2 Planar array 2.3 Distributed sensor array 2.4 Broadband sensor array 2.5 Source and sensor arrays 2.6 Multi-component sensor array 2.7 Exercises
    Chapter Three: Frequency Wavenumber Processing. 3.1 Digital filters in the w-k domain 3.2 Mapping of 1D into 2D filters 3.3 Multichannel Wiener filters 3.4 Wiener filters for ULA and UCA 3.5 Predictive noise cancellation 3.6 Exercises
    Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency wavenumber spectrum 4.2 Beamformation 4.3 Capon's w-k spectrum 4.4 Maximum entropy w-k spectrum 4.5 Doppler-Azimuth Processing 4.6 Exercises
    Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace methods (Narrowband) 5.2 Subspace methods (B...

  14. Analyzing indicators of stream health for Minnesota streams

    Science.gov (United States)

    Singh, U.; Kocian, M.; Wilson, B.; Bolton, A.; Nieber, J.; Vondracek, B.; Perry, J.; Magner, J.

    2005-01-01

    Recent research has emphasized the importance of using physical, chemical, and biological indicators of stream health for diagnosing impaired watersheds and their receiving water bodies. A multidisciplinary team at the University of Minnesota is carrying out research to develop a stream classification system for Total Maximum Daily Load (TMDL) assessment. Funding for this research is provided by the United States Environmental Protection Agency and the Minnesota Pollution Control Agency. One objective of the research study involves investigating the relationships between indicators of stream health and localized stream characteristics. Measured data from Minnesota streams collected by various government and non-government agencies and research institutions have been obtained for the research study. Innovative Geographic Information Systems tools developed by the Environmental Systems Research Institute and the University of Texas are being utilized to combine and organize the data. Simple linear relationships between index of biological integrity (IBI) and channel slope, two-year stream flow, and drainage area are presented for the Redwood River and the Snake River Basins. Results suggest that more rigorous techniques are needed to successfully capture trends in IBI scores. Additional analyses will be done using multiple regression, principal component analysis, and clustering techniques. Uncovering key independent variables and understanding how they fit together to influence stream health are critical in the development of a stream classification for TMDL assessment.

  15. ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service operating over communication media at variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), running over the standard Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that are then streamed. In the first stage of DASH, the source video is compressed to lower bit rates with an H.26x video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are described by a Media Presentation Description (MPD) manifest, the streaming format known as MPEG-DASH. The MPEG-DASH video is played on a platform with the bitdash player integrated with bitcodin. With this scheme, the video is available in several bit-rate variants, which gives rise to scalable streaming video services on the client side. The main target of the mechanism is smooth display of the MPEG-DASH streaming video on the client. The simulation results show that the scalable video streaming scheme based on MPEG-DASH improves the quality of the image displayed on the client side, since video buffering can be kept constant and smooth for the duration of viewing.
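
    Client-side scalability in such a scheme comes down to a rate-adaptation rule: the player measures segment download throughput and requests the highest rendition in the MPD's bitrate ladder that fits. The sketch below shows the simplest throughput-based rule; the ladder values and safety margin are assumptions, and production players typically combine this with buffer-level heuristics.

```python
def pick_rendition(throughput_kbps: float, ladder_kbps: list[float],
                   safety: float = 0.8) -> float:
    """Throughput-based adaptation: the highest bitrate that fits within a
    safety fraction of the measured download throughput."""
    fitting = [r for r in ladder_kbps if r <= safety * throughput_kbps]
    return max(fitting) if fitting else min(ladder_kbps)

ladder = [235, 375, 750, 1750, 3000, 4300]   # assumed bitrate ladder (kbps)
for measured in (300, 1200, 5000):
    print(f"{measured} kbps measured -> request {pick_rendition(measured, ladder)} kbps")
```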

  16. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; Nikaido, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both the streaming potential and the accumulated charge of the outflowing water were measured simultaneously using a sandwich-type cell. The voltages generated in divided sections along the flow direction satisfied additivity. The sign of the streaming potential agreed with that of the streaming electrification. The relation between streaming potential and streaming electrification was explained from the viewpoint of the electrical double layer at the glass-water interface.

  17. Studies on coaxial circular array for underwater transducer applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    of the coaxial array from the next stage of investigation during which a hybrid formulation is developed to provide a computationally efficient method of calculating impedance. Different sidelobe suppression techniques including uniform and nonuniform excitations...

  18. A Streaming Language Implementation of the Discontinuous Galerkin Method

    Science.gov (United States)

    Barth, Timothy; Knight, Timothy

    2005-01-01

    We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.

  19. SELECTING SAGITTARIUS: IDENTIFICATION AND CHEMICAL CHARACTERIZATION OF THE SAGITTARIUS STREAM

    Energy Technology Data Exchange (ETDEWEB)

    Hyde, E. A. [University of Western Sydney, Locked Bag 1797, Penrith South DC, NSW 1797 (Australia); Keller, S. [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2601 (Australia); Zucker, D. B. [Macquarie University, Physics and Astronomy, NSW 2109 (Australia); Ibata, R.; Siebert, A. [Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l’Université, F-67000 Strasbourg (France); Lewis, G. F.; Conn, A. R. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, NSW 2006 (Australia); Penarrubia, J. [ROE, The University of Edinburgh, Institute for Astronomy, Edinburgh EH9 3HJ (United Kingdom); Irwin, M.; Gilmore, G. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Lane, R. R. [Departamento de Astronomía Universidad de Concepción, Casilla 160 C, Concepción (Chile); Koch, A. [Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg (Germany); Diakogiannis, F. I. [International Center for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia); Martell, S., E-mail: E.Hyde@uws.edu.au [Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052 (Australia)

    2015-06-01

    Wrapping around the Milky Way, the Sagittarius stream is the dominant substructure in the halo. Our statistical selection method has allowed us to identify 106 highly likely members of the Sagittarius stream. Spectroscopic analysis of metallicity and kinematics of all members provides us with a new mapping of the Sagittarius stream. We find correspondence between the velocity distribution of stream stars and those computed for a triaxial model of the Milky Way dark matter halo. The Sagittarius trailing arm exhibits a metallicity gradient, ranging from −0.59 to −0.97 dex over 142°. This is consistent with the scenario of tidal disruption from a progenitor dwarf galaxy that possessed an internal metallicity gradient. We note high metallicity dispersion in the leading arm, causing a lack of detectable gradient and possibly indicating orbital phase mixing. We additionally report on a potential detection of the Sextans dwarf spheroidal in our data.

  20. Hydroelectric plant turbine, stream and spillway flow measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lampa, J.; Lemon, D.; Buermans, J. [ASL AQ Flow Inc., Sidney, BC (Canada)

    2004-07-01

    This presentation provided schematics of the turbine flow measurements and typical bulb installations at the Kootenay Canal and Wells hydroelectric power facilities in British Columbia. A typical arrangement for measuring stream flow using acoustic scintillation was also illustrated. Acoustic scintillation is portable, non-intrusive, suitable for short intakes, requires minimal maintenance and is cost effective and accurate. A comparison between current meters and acoustic scintillation was also presented. Stream flow measurement is valuable in evaluating downstream areas that are environmentally important for fish habitat. Stream flow measurement makes it possible to define circulation. The effects of any changes can be assessed by combining field measurements and numerical modelling. The presentation also demonstrated that computational fluid dynamics modelling appears promising in determining stream flow and turbulent flow at spillways. tabs., figs.

  1. Introduction to adaptive arrays

    CERN Document Server

    Monzingo, Bob; Haupt, Randy

    2011-01-01

    This second edition is an extensive modernization of the bestselling introduction to the subject of adaptive array sensor systems. With the number of applications of adaptive array sensor systems growing each year, this look at the principles and fundamental techniques that are critical to these systems is more important than ever before. Introduction to Adaptive Arrays, 2nd Edition is organized as a tutorial, taking the reader by the hand and leading them through the maze of jargon that often surrounds this highly technical subject. It is easy to read and easy to follow as fundamental concept

  2. Piezoelectric transducer array microspeaker

    KAUST Repository

    Carreno, Armando Arpys Arevalo

    2016-12-19

    In this paper we present the fabrication and characterization of a piezoelectric micro-speaker. The speaker is an array of micro-machined piezoelectric membranes, fabricated on a silicon wafer using advanced micro-machining techniques. Each array contains 2^n piezoelectric transducer membranes, where "n" is the bit number. Every element of the array has a circular structure. The membrane is made of four layers: 300 nm of platinum for the bottom electrode, 250 nm of lead zirconate titanate (PZT), a top electrode of 300 nm, and a structural layer of 50

  3. streamgap-pepper: Effects of peppering streams with many small impacts

    Science.gov (United States)

    Bovy, Jo; Erkal, Denis; Sanders, Jason

    2017-02-01

    streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.

  4. Human impacts to mountain streams

    Science.gov (United States)

    Wohl, Ellen

    2006-09-01

    Mountain streams are here defined as channel networks within mountainous regions of the world. This definition encompasses tremendous diversity of physical and biological conditions, as well as history of land use. Human effects on mountain streams may result from activities undertaken within the stream channel that directly alter channel geometry, the dynamics of water and sediment movement, contaminants in the stream, or aquatic and riparian communities. Examples include channelization, construction of grade-control structures or check dams, removal of beavers, and placer mining. Human effects can also result from activities within the watershed that indirectly affect streams by altering the movement of water, sediment, and contaminants into the channel. Deforestation, cropping, grazing, land drainage, and urbanization are among the land uses that indirectly alter stream processes. An overview of the relative intensity of human impacts to mountain streams is provided by a table summarizing human effects on each of the major mountainous regions with respect to five categories: flow regulation, biotic integrity, water pollution, channel alteration, and land use. This table indicates that very few mountains have streams not at least moderately affected by land use. The least affected mountainous regions are those at very high or very low latitudes, although our scientific ignorance of conditions in low-latitude mountains in particular means that streams in these mountains might be more altered than is widely recognized. Four case studies from northern Sweden (arctic region), Colorado Front Range (semiarid temperate region), Swiss Alps (humid temperate region), and Papua New Guinea (humid tropics) are also used to explore in detail the history and effects on rivers of human activities in mountainous regions. The overview and case studies indicate that mountain streams must be managed with particular attention to upstream/downstream connections, hillslope

  5. Doublet III neutral beam multi-stream command language system

    International Nuclear Information System (INIS)

    Campbell, L.; Garcia, J.R.

    1983-12-01

    A multi-stream command language system was developed to provide control of the dual source neutral beam injectors on the Doublet III experiment at GA Technologies Inc. The Neutral Beam command language system consists of three parts: compiler, sequencer, and interactive task. The command language, which was derived from the Doublet III tokamak command language, POPS, is compiled, using a recursive descent compiler, into reverse polish notation instructions which can then be executed by the sequencer task. The interactive task accepts operator commands via a keyboard. The interactive task directs the operation of three input streams, creating commands which are then executed by the sequencer. The streams correspond to the two sources within a Doublet III neutral beam, plus an interactive stream. The sequencer multiplexes the execution of instructions from these three streams. The instructions include reads and writes to an operator terminal, arithmetic computations, intrinsic functions such as CAMAC input and output, and logical instructions. The neutral beam command language system was implemented using Modular Computer Systems (ModComp) Pascal and consists of two tasks running on a ModComp Classic IV computer. The two tasks, the interactive and the sequencer, run independently and communicate using shared memory regions. The compiler runs as an overlay to the interactive task when so directed by operator commands. The system is successfully being used to operate the three neutral beams on Doublet III.

  6. Capacitive micromachined ultrasonic transducer arrays as tunable acoustic metamaterials

    Energy Technology Data Exchange (ETDEWEB)

    Lani, Shane W., E-mail: shane.w.lani@gmail.com, E-mail: karim.sabra@me.gatech.edu, E-mail: levent.degertekin@me.gatech.edu; Sabra, Karim G. [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 801Ferst Drive, Georgia 30332-0405 (United States); Wasequr Rashid, M.; Hasler, Jennifer [School of Electrical and Computer Engineering, Georgia Institute of Technology, Van Leer Electrical Engineering Building, 777 Atlantic Drive NW, Atlanta, Georgia 30332-0250 (United States); Levent Degertekin, F. [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 801Ferst Drive, Georgia 30332-0405 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Van Leer Electrical Engineering Building, 777 Atlantic Drive NW, Atlanta, Georgia 30332-0250 (United States)

    2014-02-03

    Capacitive Micromachined Ultrasonic Transducers (CMUTs) operating in immersion support dispersive evanescent waves due to the subwavelength periodic structure of electrostatically actuated membranes in the array. Evanescent wave characteristics also depend on the membrane resonance which is modified by the externally applied bias voltage, offering a mechanism to tune the CMUT array as an acoustic metamaterial. The dispersion and tunability characteristics are examined using a computationally efficient, mutual radiation impedance based approach to model a finite-size array and realistic parameters of variation. The simulations are verified, and tunability is demonstrated by experiments on a linear CMUT array operating in 2-12 MHz range.

  7. Protein Functionalized Nanodiamond Arrays

    Directory of Open Access Journals (Sweden)

    Liu YL

    2010-01-01

    Full Text Available Various nanoscale elements are currently being explored for bio-applications, such as bio-imaging, bio-detection, and bio-sensing. Among them, nanodiamonds possess remarkable features such as low bio-cytotoxicity, good optical properties in fluorescence and Raman spectra, and good photostability for bio-applications. In this work, we devise techniques to position functionalized nanodiamonds in self-assembled monolayer (SAM) arrays adsorbed on silicon and ITO substrate surfaces using electron beam lithography techniques. The nanodiamond arrays were functionalized with lysozyme to target a certain biomolecule or protein specifically. The optical properties of the nanodiamond-protein complex arrays were characterized by a high-throughput confocal microscope. The synthesized nanodiamond-lysozyme complex arrays were found to retain their functionality in interacting with E. coli.

  8. Photonic Crystal Nanocavity Arrays

    National Research Council Canada - National Science Library

    Altug, Hatice; Vuckovic, Jelena

    2006-01-01

    We recently proposed two-dimensional coupled photonic crystal nanocavity arrays as a route to achieve a slow-group velocity of light in all crystal directions, thereby enabling numerous applications...

  9. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  10. What Can Hierarchies Do for Data Streams?

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    Much effort has been put into building data streams management systems for querying data streams. Here, data streams have been viewed as a flow of low-level data items, e.g., sensor readings or IP packet data. Stream query languages have mostly been SQL-based, with the STREAM and TelegraphCQ lang...

  11. Requirements for the GCFR plenum streaming experiment

    International Nuclear Information System (INIS)

    Perkins, R.G.; Rouse, C.A.; Hamilton, C.J.

    1980-09-01

    This report gives the experiment objectives and generic descriptions of experimental configurations for the gas-cooled fast breeder reactor (GCFR) plenum shield experiment. This report defines four experiment phases. Each phase represents a distinct area of uncertainty in computing radiation transport from the GCFR core to the plenums, through the upper and lower plenum shields, and ultimately to the prestressed concrete reactor vessel (PCRV) liner: (1) the shield heterogeneity phase; (2) the exit shield simulation phase; (3) the plenum streaming phase; and (4) the plenum shield simulation phase

  12. Streaming movies, media, and instant access

    CERN Document Server

    Dixon, Wheeler Winston

    2013-01-01

    Film stocks are vanishing, but the iconic images of the silver screen remain -- albeit in new, sleeker formats. Today, viewers can instantly stream movies on televisions, computers, and smartphones. Gone are the days when films could only be seen in theaters or rented at video stores: movies are now accessible at the click of a button, and there are no reels, tapes, or discs to store. Any film or show worth keeping may be collected in the virtual cloud and accessed at will through services like Netflix, Hulu, and Amazon Instant. The movies have changed, and we are changing with them.

  14. Stream Clustering of Growing Objects

    Science.gov (United States)

    Siddiqui, Zaigham Faraz; Spiliopoulou, Myra

    We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream, e.g., streams of Customer and Transaction. As the Transactions stream accumulates, the Customers’ profiles grow. First, we use an incremental propositionalisation to convert the multi-table stream into a single-table stream upon which we apply clustering. For this purpose, we develop an online version of the K-Means algorithm that can handle these swelling objects and any new objects that arrive. The algorithm also monitors the quality of the model and performs re-clustering when it deteriorates. We evaluate our method on the PKDD Challenge 1999 dataset.
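
    As a sketch of the kind of online K-Means update described above (not the authors' code), the following Python fragment maintains centroids and per-centroid counts and folds each arriving object vector into its nearest cluster; the class name and parameters are illustrative.

    ```python
    # Minimal online K-Means step for streaming objects: each arriving
    # (possibly re-arriving, "grown") object vector nudges its nearest
    # centroid with a decreasing learning rate.
    import numpy as np

    class OnlineKMeans:
        def __init__(self, k, dim, seed=0):
            rng = np.random.default_rng(seed)
            self.centroids = rng.normal(size=(k, dim))
            self.counts = np.zeros(k, dtype=int)

        def update(self, x):
            """Assign vector x to its nearest centroid and nudge that centroid."""
            j = int(np.argmin(np.linalg.norm(self.centroids - x, axis=1)))
            self.counts[j] += 1
            eta = 1.0 / self.counts[j]            # decreasing learning rate
            self.centroids[j] += eta * (x - self.centroids[j])
            return j

    # Usage: feed propositionalised profiles as they grow over the stream.
    km = OnlineKMeans(k=3, dim=5)
    rng = np.random.default_rng(1)
    for _ in range(1000):
        km.update(rng.normal(size=5))
    ```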

  15. Evolution and interaction of large interplanetary streams

    International Nuclear Information System (INIS)

    Whang, Y.C.; Burlaga, L.F.

    1985-02-01

    A computer simulation for the evolution and interaction of large interplanetary streams based on multi-spacecraft observations and an unsteady, one-dimensional MHD model is presented. Two events, each observed by two or more spacecraft separated by a distance of the order of 10 AU, were studied. The first simulation is based on the plasma and magnetic field observations made by two radially-aligned spacecraft. The second simulation is based on an event observed first by Helios-1 in May 1980 near 0.6 AU and later by Voyager-1 in June 1980 at 8.1 AU. These examples show that the dynamical evolution of large-scale solar wind structures is dominated by the shock process, including the formation, collision, and merging of shocks. The interaction of shocks with stream structures also causes a drastic decrease in the amplitude of the solar wind speed variation with increasing heliocentric distance, and as a result of interactions there is a large variation of shock strengths and shock speeds. The simulation results shed light on the interpretation of the interaction and evolution of large interplanetary streams. Observations were made along a few limited trajectories, but simulation results can supplement these by providing the detailed evolution process for large-scale solar wind structures in the vast region not directly observed. The use of a quantitative nonlinear simulation model including the shock merging process is crucial in the interpretation of data obtained in the outer heliosphere.

  16. Carbon nanotube array actuators

    International Nuclear Information System (INIS)

    Geier, S; Mahrholz, T; Wierach, P; Sinapius, M

    2013-01-01

    Experimental investigations of highly vertically aligned carbon nanotubes (CNTs), also known as CNT-arrays, are the main focus of this paper. The free strain resulting from active material behavior is analyzed via a novel experimental setup. Previous tests of papers made of randomly oriented CNTs, also called Bucky-papers, revealed comparatively low free strain. The anisotropy of aligned CNTs promises better performance. Via synthesis techniques like chemical vapor deposition (CVD) or plasma enhanced CVD (PECVD), highly aligned arrays of multi-walled carbon nanotubes (MWCNTs) are synthesized. Two different types of CNT-arrays are analyzed, first morphologically, and then optically tested for their active characteristics. One type of the analyzed arrays features tube lengths of 750–2000 μm with a large variety of diameters between 20 and 50 nm and a wave-like CNT-shape. The second type features a maximum, almost uniform, length of 12 μm and a constant diameter of 50 nm. Different CNT-lengths and array types are tested for their active behavior. The tests show that the quality of orientation is the most decisive property for excellent active behavior. Due to their alignment, CNT-arrays offer the opportunity to clarify the actuation mechanism of architectures made of CNTs. (paper)

  17. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    Science.gov (United States)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
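
    The time-invariant, asymptotic-limit approximation at the heart of such a filter can be illustrated in a few lines: iterate the error covariance of a small linear system to its steady state once, then reuse the resulting constant gain for every assimilation step. The toy matrices below are assumptions for illustration, not the paper's ocean model.

    ```python
    # Hedged sketch: steady-state Kalman filter for x_{t+1} = A x_t + w,
    # y_t = H x_t + v.  The covariance is iterated offline to a fixed point,
    # giving one constant gain K reused at every assimilation step.
    import numpy as np

    def steady_state_gain(A, H, Q, R, iters=500):
        P = Q.copy()
        for _ in range(iters):                    # discrete Riccati iteration
            P = A @ P @ A.T + Q                   # forecast covariance
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            P = (np.eye(len(A)) - K @ H) @ P      # analysis covariance
        return K

    A = np.array([[0.95, 0.1], [0.0, 0.9]])   # toy dynamics (illustrative only)
    H = np.array([[1.0, 0.0]])                # observe first state component
    Q = 0.01 * np.eye(2)
    R = np.array([[0.1]])
    K = steady_state_gain(A, H, Q, R)

    x = np.zeros(2)
    for y in [0.5, 0.7, 0.6]:                 # assimilate a few observations
        x = A @ x                             # forecast step
        x = x + K @ (np.atleast_1d(y) - H @ x)  # constant-gain update
    print(x)
    ```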

  18. Low-frequency synthesis array in earth orbit

    International Nuclear Information System (INIS)

    Jones, D.L.; Preston, R.A.; Kuiper, T.B.H.

    1987-01-01

    The scientific objectives and design concept of a space-based VLBI array for high-resolution astronomical observations at 1-30 MHz are discussed. The types of investigations calling for such an array include radio spectroscopy of individual objects, measurement of the effects of scattering and refraction by the interplanetary medium (IPM) and the ISM, mapping the distribution of low-energy cosmic-ray electrons, and determining the extent of the Galactic halo. Consideration is given to the limitations imposed on an LF VLBI array by the ionosphere, the IPM, and the ISM; the calibration advantages offered by circular polar orbits of slightly differing ascending-node longitude for the array satellites; and collection of the IF data streams from the array satellites by one master satellite prior to transmission to the ground. It is shown that determination of the three-dimensional array geometry by means of intersatellite radio links is feasible if there are at least seven spacecraft in the array

  19. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
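
    The "work farm" pattern named in the abstract translates naturally into most host languages. Below is a minimal Python analogy, assuming a simple squaring function in place of a real compute object (the MPPA itself is programmed with its own tool chain; this is an illustration of the pattern only).

    ```python
    # Work farm: a parallel set of worker objects with one input stream and
    # one output stream, here emulated with threads and queues.
    import queue
    import threading

    def worker(inp, out):
        while True:
            item = inp.get()
            if item is None:                  # poison pill terminates the worker
                break
            out.put(item * item)              # stand-in for a compute kernel

    inp, out = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(inp, out)) for _ in range(4)]
    for w in workers:
        w.start()

    for x in range(100):                      # the single input stream
        inp.put(x)
    for _ in workers:                         # shut the farm down
        inp.put(None)
    for w in workers:
        w.join()

    results = [out.get() for _ in range(100)] # the single output stream (unordered)
    print(sorted(results)[:5])
    ```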

  20. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension; expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves these two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*⁻¹(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and with no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
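
    The addressing idea can be sketched compactly. The simplified 2-D Python version below (an illustration of the principle, not the paper's exact F*() function) records one entry per expansion and computes linear addresses from that table, so earlier elements never move when the array grows along either dimension.

    ```python
    # Each expansion appends a contiguous segment to the file; a small table
    # (standing in for the paper's axial-vector information) maps an index
    # (i, j) to its linear address without reorganising earlier segments.
    class Extendible2D:
        def __init__(self, rows, cols):
            self.rows, self.cols = rows, cols
            # (dim, lo, rows, cols, base): segment holds indices whose
            # coordinate along 'dim' is >= lo, laid out row-major inside.
            self.segments = [(None, 0, rows, cols, 0)]
            self.size = rows * cols

        def extend_rows(self, dr):
            self.segments.append((0, self.rows, dr, self.cols, self.size))
            self.rows += dr
            self.size += dr * self.cols

        def extend_cols(self, dc):
            self.segments.append((1, self.cols, self.rows, dc, self.size))
            self.cols += dc
            self.size += self.rows * dc

        def address(self, i, j):
            """Linear address of (i, j): find its segment, then offset inside."""
            for dim, lo, r, c, base in reversed(self.segments):
                if dim == 0 and i >= lo:
                    return base + (i - lo) * c + j
                if dim == 1 and j >= lo:
                    return base + i * c + (j - lo)
            _, _, _, c0, _ = self.segments[0]
            return i * c0 + j

    a = Extendible2D(2, 3)       # addresses 0..5
    a.extend_cols(2)             # old addresses unchanged; new block appended
    a.extend_rows(1)
    print(a.address(0, 0), a.address(0, 4), a.address(2, 1))  # 0 7 11
    ```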

  1. Percent Forest Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  2. Stream Habitat Reach Summary - NCWAP [ds158

    Data.gov (United States)

    California Natural Resource Agency — The Stream Habitat - NCWAP - Reach Summary [ds158] shapefile contains in-stream habitat survey data summarized to the stream reach level. It is a derivative of the...

  3. Percent Agriculture Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  4. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Science.gov (United States)

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...

  5. Optimal array factor radiation pattern synthesis for linear antenna array using cat swarm optimization: validation by an electromagnetic simulator

    Institute of Scientific and Technical Information of China (English)

    Gopi RAM; Durbadal MANDAL; Sakti Prasad GHOSHAL; Rajib KAR

    2017-01-01

    In this paper, an optimal design of linear antenna arrays having microstrip patch antenna elements has been carried out. Cat swarm optimization (CSO) has been applied for the optimization of the control parameters of the radiation pattern of an antenna array. The optimal radiation patterns of isotropic antenna elements are obtained by optimizing the current excitation weight of each element and the inter-element spacing. The antenna arrays of 12, 16, and 20 elements are taken as examples. The arrays are designed using MATLAB computation and are validated through Computer Simulation Technology-Microwave Studio (CST-MWS). From the simulation results it is evident that CSO is able to yield optimal designs of linear antenna arrays of patch antenna elements.
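
    For reference, the quantity being optimized is the array factor of a linear array, AF(θ) = Σₙ Iₙ exp(j·k·xₙ·sin θ). The brief sketch below evaluates it over angle with assumed, illustrative excitation and spacing values rather than the CSO-optimized ones.

    ```python
    # Evaluate the array factor of an N-element linear array; the weights I_n
    # and positions x_n here are a uniform baseline, not an optimized design.
    import numpy as np

    def array_factor(weights, positions, wavelength, theta):
        k = 2 * np.pi / wavelength
        phase = np.exp(1j * k * np.outer(np.sin(theta), positions))
        return phase @ weights

    N = 12
    lam = 1.0
    pos = np.arange(N) * 0.5 * lam            # half-wavelength spacing
    w = np.ones(N, dtype=complex)             # uniform excitation (baseline)
    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    af = np.abs(array_factor(w, pos, lam, theta))
    af_db = 20 * np.log10(af / af.max())      # normalised pattern in dB
    print(f"peak sidelobe: {af_db[np.abs(theta) > 0.2].max():.1f} dB")
    ```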

  6. Bearing estimation with acoustic vector-sensor arrays

    International Nuclear Information System (INIS)

    Hawkes, M.; Nehorai, A.

    1996-01-01

    We consider direction-of-arrival (DOA) estimation using arrays of acoustic vector sensors in free space, and derive expressions for the Cramér-Rao bound on the DOA parameters when there is a single source. The vector-sensor array is seen to have improved performance over the traditional scalar-sensor (pressure-sensor) array for two distinct reasons: its elements have an inherent directional sensitivity and the array makes a greater number of measurements. The improvement is greatest for small array apertures and low signal-to-noise ratios. Examination of the conventional beamforming and Capon DOA estimators shows that vector-sensor arrays can completely resolve the bearing, even with a linear array, and can remove the ambiguities associated with spatial undersampling. We also propose and analyze a diversely-oriented array of velocity sensors that possesses some of the advantages of the vector-sensor array without the increase in hardware and computation. In addition, in certain scenarios it can avoid problems with spatially correlated noise that the vector-sensor array may suffer. copyright 1996 American Institute of Physics

  7. Spring 5 & reactive streams

    CERN Multimedia

    CERN. Geneva; Clozel, Brian

    2017-01-01

    Spring is a framework widely used by the world-wide Java community, and it is also extensively used at CERN. The accelerator control system is constituted of 10 million lines of Java code, spread across more than 1000 projects (jars) developed by 160 software engineers. Around half of this (all server-side Java code) is based on the Spring framework. Warning: the speakers will assume that people attending the seminar are familiar with Java and Spring’s basic concepts. Spring 5.0 and Spring Boot 2.0 updates (45 min) This talk will cover the big ticket items in the 5.0 release of Spring (including Kotlin support, @Nullable and JDK9) and provide an update on Spring Boot 2.0, which is scheduled for the end of the year. Reactive Spring (1h) Spring Framework 5.0 has been released - and it now supports reactive applications in the Spring ecosystem. During this presentation, we'll talk about the reactive foundations of Spring Framework with the Reactor project and the reactive streams specification. We'll al...

  8. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is becoming data intensive. Supercomputers must be balanced systems: not just CPU farms but also petascale I/O and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  9. New Potentiometric Wireless Chloride Sensors Provide High Resolution Information on Chemical Transport Processes in Streams

    Directory of Open Access Journals (Sweden)

    Keith Smettem

    2017-07-01

    Full Text Available Quantifying the travel times, pathways, and dispersion of solutes moving through stream environments is critical for understanding the biogeochemical cycling processes that control ecosystem functioning. Validation of stream solute transport and exchange process models requires data obtained from in-stream measurement of chemical concentration changes through time. This can be expensive and time consuming, leading to a need for cheap distributed sensor arrays that respond instantly and record chemical transport at points of interest on timescales of seconds. To meet this need we apply new, low-cost (on the order of a euro per sensor) potentiometric chloride sensors used in a distributed array to obtain data with high spatial and temporal resolution. The application here is to monitoring in-stream hydrodynamic transport and dispersive mixing of an injected chemical, in this case NaCl. We present data obtained from the distributed sensor array under baseflow conditions for stream reaches in Luxembourg and Western Australia. The reaches were selected to provide a range of increasingly complex in-channel flow patterns. Mid-channel sensor results are comparable to data obtained from more expensive electrical conductivity meters, but simultaneous acquisition of tracer data at several positions across the channel allows far greater spatial resolution of hydrodynamic mixing processes and identification of chemical ‘dead zones’ in the study reaches.

  10. Plasma dynamics in aluminium wire array Z-pinch implosions

    International Nuclear Information System (INIS)

    Bland, S.N.

    2001-01-01

    The wire array Z-pinch is the world's most powerful laboratory X-ray source. An achieved power of ∼280 TW has generated great interest in the use of these devices as a source of hohlraum heating for inertial confinement fusion experiments. However, the physics underlying how wire array Z-pinches implode is not well understood. This thesis presents the first detailed measurements of plasma dynamics in wire array experiments. The MAGPIE generator, with currents of up to 1.4 MA, 150 ns 10–90% rise-time, was used to implode arrays of 16 mm diameter typically containing between 8 and 64 15 μm aluminium wires. Diagnostics included: end and side-on laser probing with interferometry, schlieren and shadowgraphy channels; radial and axial streak photography; gated X-ray imaging; XUV and hard X-ray spectrometry; filtered XRDs and diamond PCDs; and a novel X-ray backlighting system to probe high density plasma. It was found that the plasma formed from the wires consisted of cold, dense cores, which ablated producing hot, low density coronal plasma. After an initial acceleration around the cores, coronal plasma streams flowed force-free towards the axis, with an instability wavelength determined by the core size. At ∼50% of the implosion time, the streams collided on axis forming a precursor plasma which appeared to be uniform, stable, and inertially confined. The existence of core-corona structure significantly affected implosion dynamics. For arrays with <64 wires, the wire cores remained in their original positions until ∼80% of the implosion time before accelerating rapidly. At 64 wires a transition in implosion trajectories to 0-D like occurred indicating a possible merger of current carrying plasma close to the cores - the cores themselves did not merge. During implosion, the cores initially developed uncorrelated instabilities that then transformed into a longer wavelength global mode of instability. The study of nested arrays (2 concentric arrays, one inside the other

  11. Waste streams from reprocessing operations

    International Nuclear Information System (INIS)

    Andersson, B.; Ericsson, A.-M.

    1978-03-01

    The three main products from reprocessing operations are uranium, plutonium and vitrified high-level waste. The purpose of this report is to identify and quantify additional waste streams containing radioactive isotopes. Special emphasis is laid on Sr, Cs and the actinides. The main part, more than 99%, of both the fission products and the transuranic elements is contained in the HLW stream. Small quantities sometimes contaminate the U- and Pu-streams, and the rest is found in the medium-level waste.

  12. An Association-Oriented Partitioning Approach for Streaming Graph Query

    Directory of Open Access Journals (Sweden)

    Yun Hao

    2017-01-01

    Full Text Available The volumes of real-world graphs like knowledge graphs are increasing rapidly, which makes streaming graph processing a hot research area. Processing graphs in a streaming setting poses significant challenges from different perspectives, among which the graph partitioning method plays a key role. For graph queries, a well-designed partitioning method is essential for achieving good performance. Existing offline graph partitioning methods often require full knowledge of the graph, which is not available during streaming graph processing. To handle this problem, we propose an association-oriented streaming graph partitioning method named Assc. This approach first computes the rank values of vertices with a hybrid approximate PageRank algorithm. After splitting these vertices with an adapted variant of the affinity propagation algorithm, the processing order of vertices in the sliding window can be determined. Finally, according to the level of these vertices and their association, the partition to which each vertex should be distributed is decided. We compare its performance with a set of streaming graph partitioning methods and METIS, a widely adopted offline approach. The results show that our solution can partition graphs with hundreds of millions of vertices in a streaming setting on a large collection of graph datasets, and that our approach outperforms the other graph partitioning methods.
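
    For readers unfamiliar with the setting, the sketch below shows a standard greedy streaming partitioner of the kind such methods are compared against: each vertex is assigned on arrival to the partition holding most of its already-seen neighbours, penalized by partition load. This is a generic baseline, not the Assc algorithm itself.

    ```python
    # Greedy streaming graph partitioning baseline: one pass, one decision
    # per arriving vertex, using only neighbours seen so far.
    def stream_partition(vertex_stream, k, capacity):
        part_of = {}                            # vertex -> partition id
        load = [0] * k
        for v, neighbours in vertex_stream:     # vertices arrive one at a time
            score = [0.0] * k
            for u in neighbours:
                if u in part_of:
                    score[part_of[u]] += 1.0    # reward co-located neighbours
            for p in range(k):
                score[p] -= load[p] / capacity  # linear load penalty
            best = max(range(k), key=lambda p: score[p])
            part_of[v] = best
            load[best] += 1
        return part_of

    stream = [(0, []), (1, [0]), (2, [0, 1]), (3, [2]), (4, [0, 3])]
    print(stream_partition(stream, k=2, capacity=3))
    ```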

  13. Determination of the self purification of streams using tracers

    International Nuclear Information System (INIS)

    Salviano, J.S.

    1982-04-01

    A methodology for the 'in situ' evaluation of the self purification of streams is discussed. It consists of the simultaneous injection of two tracers into the stream. One of the tracers is oxidized by biochemical processes. It can be either artificially supplied to the stream or a naturally present component can be used. This tracer is used for the determination of the self purification parameters. The other tracer is conservative and allows for the hydrodynamic effects. Tests have been carried out in two streams with quite different hydrodynamic and physicochemical conditions. In the first stream, with a flow-rate of about 0.9 m³/s, urea was used as the nonconservative tracer. In the other stream, which had a flow-rate of about 5 m³/s, only a radioactive tracer has been used, and the rate of biochemical oxidation has been determined from BOD measurements. Calculations have been implemented on a digital computer. In both cases it was found that the reoxygenation rate is more conveniently determined by empirical formulas. Results from both tests have been deemed realistic by comparison with similar experiments. (Author)

  14. An adjustable linear Halbach array

    Energy Technology Data Exchange (ETDEWEB)

    Hilton, J.E., E-mail: James.Hilton@csiro.au [CSIRO Mathematics, Informatics and Statistics, Clayton South, VIC 3169 (Australia); McMurry, S.M. [School of Physics, Trinity College, Dublin (Ireland)

    2012-07-15

    The linear Halbach array is a well-known planar magnetic structure capable, in the idealized case, of generating a one-sided magnetic field. We show that such a field can be created from an array of uniformly magnetized rods, and rotating these rods in an alternating fashion can smoothly transfer the resultant magnetic field through the plane of the device. We examine an idealized model composed of infinite line dipoles and carry out computational simulations on a realizable device using a magnetic boundary element method. Such an arrangement can be used for an efficient latching device, or to produce a highly tunable field in the space above the device. - Highlights: ► We model an adjustable 'one-sided' flux sheet made up of a series of dipolar magnetic field sources. ► We show that magnetic field can be switched from one side of sheet to other by a swap rotation of each of magnetic sources. ► Investigations show that such an arrangement is practical and can easily be fabricated. ► The design has a wide range of potential applications.

  15. An adjustable linear Halbach array

    International Nuclear Information System (INIS)

    Hilton, J.E.; McMurry, S.M.

    2012-01-01

    The linear Halbach array is a well-known planar magnetic structure capable, in the idealized case, of generating a one-sided magnetic field. We show that such a field can be created from an array of uniformly magnetized rods, and rotating these rods in an alternating fashion can smoothly transfer the resultant magnetic field through the plane of the device. We examine an idealized model composed of infinite line dipoles and carry out computational simulations on a realizable device using a magnetic boundary element method. Such an arrangement can be used for an efficient latching device, or to produce a highly tunable field in the space above the device. - Highlights: ► We model an adjustable ‘one-sided’ flux sheet made up of a series of dipolar magnetic field sources. ► We show that magnetic field can be switched from one side of sheet to other by a swap rotation of each of magnetic sources. ► Investigations show that such an arrangement is practical and can easily be fabricated. ► The design has a wide range of potential applications.
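
    The underlying construction is easy to reproduce numerically. In the sketch below, each rod is modelled as a 2-D line dipole whose moment direction advances by 90 degrees per rod; summing the fields exhibits the one-sided character, and reversing the rotation sense (a coarse stand-in for the paper's alternating rod rotation) swaps the strong side. Geometry and constants are illustrative, with the physical prefactor dropped.

    ```python
    # Sum the fields of a row of 2-D line dipoles arranged in a Halbach
    # pattern (moment direction rotating 90 degrees per rod).
    import numpy as np

    def line_dipole_B(r, m):
        """Field of a 2-D line dipole at displacement r (constant prefactor dropped)."""
        r2 = r @ r
        rhat = r / np.sqrt(r2)
        return (2 * (m @ rhat) * rhat - m) / r2

    def sheet_field(point, n_rods=8, spacing=1.0, sense=+1):
        """Field at 'point'; 'sense' sets the direction of moment rotation."""
        B = np.zeros(2)
        for i in range(n_rods):
            angle = sense * i * np.pi / 2          # 90-degree Halbach steps
            m = np.array([np.cos(angle), np.sin(angle)])
            B += line_dipole_B(point - np.array([i * spacing, 0.0]), m)
        return B

    above = np.array([3.5, 1.0])    # a point above the mid-span of the sheet
    below = np.array([3.5, -1.0])   # its mirror image below
    for sense in (+1, -1):          # reversing the sense swaps the strong side
        Ba, Bb = sheet_field(above, sense=sense), sheet_field(below, sense=sense)
        print(sense, np.linalg.norm(Ba), np.linalg.norm(Bb))
    ```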

  16. StreamStats in Oklahoma - Drainage-Basin Characteristics and Peak-Flow Frequency Statistics for Ungaged Streams

    Science.gov (United States)

    Smith, S. Jerrod; Esralew, Rachel A.

    2010-01-01

    The USGS Streamflow Statistics (StreamStats) Program was created to make geographic information systems-based estimation of streamflow statistics easier, faster, and more consistent than previously used manual techniques. The StreamStats user interface is a map-based internet application that allows users to easily obtain streamflow statistics, basin characteristics, and other information for user-selected U.S. Geological Survey data-collection stations and ungaged sites of interest. The application relies on the data collected at U.S. Geological Survey streamflow-gaging stations, computer-aided computations of drainage-basin characteristics, and published regression equations for several geographic regions comprising the United States. The StreamStats application interface allows the user to (1) obtain information on features in selected map layers, (2) delineate drainage basins for ungaged sites, (3) download drainage-basin polygons to a shapefile, (4) compute selected basin characteristics for delineated drainage basins, (5) estimate selected streamflow statistics for ungaged points on a stream, (6) print map views, (7) retrieve information for U.S. Geological Survey streamflow-gaging stations, and (8) get help on using StreamStats. StreamStats was designed for national application, with each state, territory, or group of states responsible for creating unique geospatial datasets and regression equations to compute selected streamflow statistics. With the cooperation of the Oklahoma Department of Transportation, StreamStats has been implemented for Oklahoma and is available at http://water.usgs.gov/osw/streamstats/. The Oklahoma StreamStats application covers 69 processed hydrologic units and most of the state of Oklahoma. Basin characteristics available for computation include contributing drainage area, contributing drainage area that is unregulated by Natural Resources Conservation Service floodwater retarding structures, mean-annual precipitation at the

  17. A Distributed Flocking Approach for Information Stream Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day, and there is a lack of comprehensive tools that can analyze these information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to produce accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.
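
    A toy sketch of the flocking idea, assuming synthetic term vectors and simple attraction/repulsion dynamics (not the ORNL implementation): each document is a boid that drifts toward nearby boids with similar content, so clusters emerge as flocks and can keep adapting as the stream changes.

    ```python
    # Flocking-style clustering: documents are boids on a 2-D canvas whose
    # motion is driven by the content similarity of nearby boids.
    import numpy as np

    rng = np.random.default_rng(1)
    docs = rng.normal(size=(30, 10))     # stand-in document term vectors
    pos = rng.random((30, 2)) * 10.0     # boid positions on the canvas

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def step(pos, docs, radius=3.0, rate=0.1):
        new = pos.copy()
        for i in range(len(pos)):
            move = np.zeros(2)
            for j in range(len(pos)):
                if i == j:
                    continue
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                if dist < radius:
                    # similar neighbours attract, dissimilar ones repel
                    move += cosine(docs[i], docs[j]) * d / (dist + 1e-9)
            new[i] += rate * move
        return new

    for _ in range(200):                 # let flocks (clusters) form
        pos = step(pos, docs)
    ```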

  18. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  19. Array abstractions for GPU programming

    DEFF Research Database (Denmark)

    Dybdal, Martin

    The shift towards massively parallel hardware platforms for high-performance computing tasks has introduced a need for improved programming models that facilitate ease of reasoning for both users and compiler optimization. A promising direction is the field of functional data-parallel programming..., for which functional invariants can be utilized by optimizing compilers to perform large program transformations automatically. However, previous work in this area allows users only limited ability to reason about the performance of algorithms. For this reason, such languages have yet to see wide... industrial adoption. We present two programming languages that attempt both to support industrial applications and to provide reasoning tools for hierarchical data-parallel architectures, such as GPUs. First, we present TAIL, an array-based intermediate language and compiler framework for compiling a large

  20. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats...... and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled...... along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored...

  1. ATLAS Live: Collaborative Information Streams

    Energy Technology Data Exchange (ETDEWEB)

    Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Collaboration: ATLAS Collaboration

    2011-12-23

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  2. ATLAS Live: Collaborative Information Streams

    International Nuclear Information System (INIS)

    Goldfarb, Steven

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  3. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at th...

  4. STREAMS - Technology Programme. Yearbook 2003

    International Nuclear Information System (INIS)

    2003-01-01

    The STREAMS Technology Programme addresses municipal waste. Municipal waste is composed of waste from households and small businesses. The programme focuses on five areas: waste prevention; collection, transportation, and management of waste streams; waste treatment technologies; waste recycling into raw materials and new products; and landfill technologies. The development projects of the STREAMS Programme utilize a number of different technologies, such as biotechnology, information technology, materials technology, measurement and analysis, and automation technology. Finnish expertise in materials recycling technologies and related electronics and information technology is extremely high on a worldwide scale even though the companies represent SMEs. Started in 2001, the STREAMS programme has a total volume of 27 million euros, half of which is funded by Tekes. The programme runs through the end of 2004. (author)

  5. On-stream analysis systems

    International Nuclear Information System (INIS)

    Howarth, W.J.; Watt, J.S.

    1982-01-01

    An outline of some commercially available on-stream analysis systems is given. Systems based on x-ray tube/crystal spectrometers, scintillation detectors, proportional detectors and solid-state detectors are discussed.

  6. Wire Array Photovoltaics

    Science.gov (United States)

    Turner-Evans, Dan

    Over the past five years, the cost of solar panels has dropped drastically and, in concert, the number of installed modules has risen exponentially. However, solar electricity is still more than twice as expensive as electricity from a natural gas plant. Fortunately, wire array solar cells have emerged as a promising technology for further lowering the cost of solar. Si wire array solar cells are formed with a unique, low cost growth method and use 100 times less material than conventional Si cells. The wires can be embedded in a transparent, flexible polymer to create a free-standing array that can be rolled up for easy installation in a variety of form factors. Furthermore, by incorporating multijunctions into the wire morphology, higher efficiencies can be achieved while taking advantage of the unique defect relaxation pathways afforded by the 3D wire geometry. The work in this thesis shepherded Si wires from undoped arrays to flexible, functional large area devices and laid the groundwork for multijunction wire array cells. Fabrication techniques were developed to turn intrinsic Si wires into full p-n junctions and the wires were passivated with a-Si:H and a-SiNx:H. Single wire devices yielded open circuit voltages of 600 mV and efficiencies of 9%. The arrays were then embedded in a polymer and contacted with a transparent, flexible, Ni nanoparticle and Ag nanowire top contact. The contact connected >99% of the wires in parallel and yielded flexible, substrate free solar cells featuring hundreds of thousands of wires. Building on the success of the Si wire arrays, GaP was epitaxially grown on the material to create heterostructures for photoelectrochemistry. These cells were limited by low absorption in the GaP due to its indirect bandgap, and poor current collection due to a diffusion length of only 80 nm. However, GaAsP on SiGe offers a superior combination of materials, and wire architectures based on these semiconductors were investigated for multijunction

  7. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang; Wang, Wei

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting the changes in massive streaming data with a non

  8. Short-term stream flow forecasting at Australian river sites using data-driven regression techniques

    CSIR Research Space (South Africa)

    Steyn, Melise

    2017-09-01

    Full Text Available This study proposes a computationally efficient solution to stream flow forecasting for river basins where historical time series data are available. Two data-driven modeling techniques are investigated, namely support vector regression...
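
    A minimal sketch of the support-vector-regression variant of such a forecaster follows, assuming synthetic data and a three-day lag window in place of the study's river records and feature choices.

    ```python
    # Short-term flow forecasting with SVR: predict tomorrow's flow from the
    # previous three days (synthetic seasonal series stands in for gauge data).
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    t = np.arange(1000)
    flow = 50 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)

    lag = 3                                    # 3-day lag window
    X = np.column_stack([flow[i:i - lag] for i in range(lag)])
    y = flow[lag:]

    model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
    model.fit(X[:800], y[:800])                # train on the first 800 days
    print("test R^2:", model.score(X[800:], y[800:]))
    ```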

  9. Data Stream Mining

    Science.gov (United States)

    Gaber, Mohamed Medhat; Zaslavsky, Arkady; Krishnaswamy, Shonali

    Data mining is concerned with the process of computationally extracting hidden knowledge structures represented in models and patterns from large data repositories. It is an interdisciplinary field of study that has its roots in databases, statistics, machine learning, and data visualization. Data mining has emerged as a direct outcome of the data explosion that resulted from the success in database and data warehousing technologies over the past two decades (Fayyad, 1997; Fayyad, 1998; Kantardzic, 2003).

  10. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which

  11. nitrogen saturation in stream ecosystems

    OpenAIRE

    Earl, S. R.; Valett, H. M.; Webster, J. R.

    2006-01-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (¹⁵N-NO₃⁻) to measure uptake. Experiments were conducted in streams spanning a gradient ...
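
    The saturation response at the core of the concept is the Michaelis-Menten uptake curve, U = Umax·C/(Ks + C). A tiny sketch with assumed parameter values shows how uptake efficiency declines as concentration approaches saturation.

    ```python
    # Michaelis-Menten uptake: areal uptake U saturates at Umax as the
    # nutrient concentration C grows, so the efficiency U/C declines.
    def uptake(C, Umax=100.0, Ks=50.0):
        """Areal uptake at concentration C (illustrative units and parameters)."""
        return Umax * C / (Ks + C)

    for C in [5, 25, 50, 200, 1000]:
        U = uptake(C)
        print(f"C = {C:5d}  U = {U:6.1f}  efficiency U/C = {U / C:.2f}")
    ```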

  12. Bridging Scales: A Model-Based Assessment of the Technical Tidal-Stream Energy Resource off Massachusetts, USA

    Science.gov (United States)

    Cowles, G. W.; Hakim, A.; Churchill, J. H.

    2016-02-01

    Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges to the design process stem from the wide range of problem scales extending from device to array. In the present approach we apply a multi-model approach to bridge the scales of interest and select optimal device geometries to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (∼10 m), a barotropic setup of the unstructured grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP as well as pressure data. For device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites including comparison with theoretical approaches will be presented.

  13. Toward Design Guidelines for Stream Restoration Structures: Measuring and Modeling Unsteady Turbulent Flows in Natural Streams with Complex Hydraulic Structures

    Science.gov (United States)

    Lightbody, A.; Sotiropoulos, F.; Kang, S.; Diplas, P.

    2009-12-01

    Despite their widespread application to prevent lateral river migration, stabilize banks, and promote aquatic habitat, shallow transverse flow training structures such as rock vanes and stream barbs lack quantitative design guidelines. Due to the lack of fundamental knowledge about the interaction of the flow field with the sediment bed, existing engineering standards are typically based on various subjective criteria or on cross-sectionally-averaged shear stresses rather than local values. Here, we examine the performance and stability of in-stream structures within a field-scale single-threaded sand-bed meandering stream channel in the newly developed Outdoor StreamLab (OSL) at the St. Anthony Falls Laboratory (SAFL). Before and after the installation of a rock vane along the outer bank of the middle meander bend, high-resolution topography data were obtained for the entire 50-m-long reach at 1-cm spatial scale in the horizontal and sub-millimeter spatial scale in the vertical. In addition, detailed measurements of flow and turbulence were obtained using acoustic Doppler velocimetry at twelve cross-sections focused on the vicinity of the structure. Measurements were repeated at a range of extreme events, including in-bank flows with an approximate flow rate of 44 L/s (1.4 cfs) and bankfull floods with an approximate flow rate of 280 L/s (10 cfs). Under both flow rates, the structure reduced near-bank shear stresses and resulted in both a deeper thalweg and near-bank aggradation. The resulting comprehensive dataset has been used to validate a large eddy simulation carried out by SAFL’s computational fluid dynamics model, the Virtual StreamLab (VSL). This versatile computational framework is able to efficiently simulate 3D unsteady turbulent flows in natural streams with complex in-stream structures and as a result holds promise for the development of much-needed quantitative design guidelines.

  14. The Northeast Stream Quality Assessment

    Science.gov (United States)

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  15. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
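
    For concreteness, transfer entropy for a pair of binarised spike trains with history length one can be estimated by plain counting, as in the hedged sketch below; the study's analyses (and the partial information decomposition) are considerably more involved, but the quantity computed is the same.

    ```python
    # Transfer entropy TE(X -> Y) = sum p(y', y, x) * log2[p(y'|y,x) / p(y'|y)],
    # where y' is Y one step ahead.  Plain counting estimator, history length 1.
    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y):
        triples = Counter(zip(y[1:], y[:-1], x[:-1]))      # (y', y, x) counts
        pairs_yy = Counter(zip(y[1:], y[:-1]))             # (y', y) counts
        pairs_yx = Counter(zip(y[:-1], x[:-1]))            # (y, x) counts
        singles_y = Counter(y[:-1])
        n = len(x) - 1
        te = 0.0
        for (yp, yc, xc), c in triples.items():
            p_joint = c / n
            p_cond_full = c / pairs_yx[(yc, xc)]           # p(y'|y, x)
            p_cond_hist = pairs_yy[(yp, yc)] / singles_y[yc]  # p(y'|y)
            te += p_joint * np.log2(p_cond_full / p_cond_hist)
        return te

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 10000)
    y = np.roll(x, 1)                 # y copies x with a one-step delay
    y[0] = 0
    print(transfer_entropy(x.tolist(), y.tolist()))   # ~1 bit (X drives Y)
    print(transfer_entropy(y.tolist(), x.tolist()))   # ~0 bits (no influence)
    ```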

  16. A review of array radars

    Science.gov (United States)

    Brookner, E.

    1981-10-01

    Achievements in the area of array radars are illustrated by such activities as the operational deployment of the large high-power, high-range-resolution Cobra Dane; the operational deployment of two all-solid-state high-power, large UHF Pave Paws radars; and the development of the SAM multifunction Patriot radar. This paper reviews the following topics: array radars steered in azimuth and elevation by phase shifting (phase-phase steered arrays); arrays steered + or - 60 deg, limited scan arrays, hemispherical coverage, and omnidirectional coverage arrays; array radars steering electronically in only one dimension, either by frequency or by phase steering; and array radar antennas which use no electronic scanning but instead use array antennas for achieving low antenna sidelobes.

  17. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    Directory of Open Access Journals (Sweden)

    H. Carter Edwards

    2012-01-01

    Full Text Available Large, complex scientific and engineering application codes have a significant investment in computational kernels to implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices each with its own memory space, (2) data parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices – potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
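
    Kokkos itself is a C++ library; as a language-neutral illustration of why access patterns must be mapped per device, the following NumPy sketch times the same column-sum traversal against row-major and column-major storage. Matching layout to traversal order is cache-friendly and measurably faster, which is the effect the Kokkos array API automates per device.

    ```python
    # Same reduction, two storage layouts: the traversal walks rows, so the
    # row-major ("C") layout is contiguous and fast, the column-major ("F")
    # layout is strided and slow.
    import numpy as np
    import time

    n = 4000
    row_major = np.ones((n, n), order="C")
    col_major = np.ones((n, n), order="F")

    def column_sums(a):
        total = np.zeros(a.shape[1])
        for i in range(a.shape[0]):     # walks one row at a time
            total += a[i, :]
        return total

    for name, a in [("row-major", row_major), ("column-major", col_major)]:
        t0 = time.perf_counter()
        column_sums(a)
        print(f"{name}: {time.perf_counter() - t0:.3f} s")
    ```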

  18. Interactive real-time media streaming with reliable communication

    Science.gov (United States)

    Pan, Xunyu; Free, Kevin M.

    2014-02-01

    Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented based on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software ρεύμα (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, ρεύμα utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alteration of playback, such as pausing playback or tracking to a
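
    The transport layer described here reduces, in skeleton form, to a sender emitting fixed-size datagrams and a receiver thread consuming them. The Python sketch below (arbitrary port and chunk size; no sequencing, pacing, or loss handling, which any real player such as the one described would add) illustrates the UDP mechanics only.

    ```python
    # Bare-bones UDP streaming demo over loopback.  UDP gives no delivery
    # guarantee; real media streaming adds sequence numbers and loss handling.
    import socket
    import threading

    ADDR = ("127.0.0.1", 50007)   # arbitrary local port for the demo
    CHUNK = 1024

    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(ADDR)          # bind before any datagram is sent

    def receiver():
        received = bytearray()
        while True:
            pkt, _ = recv_sock.recvfrom(CHUNK)
            if not pkt:           # empty datagram marks end-of-stream
                break
            received += pkt       # a real player would decode and render here
        print(f"received {len(received)} bytes")

    r = threading.Thread(target=receiver)
    r.start()

    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = b"x" * 100_000         # stand-in for encoded media
    for i in range(0, len(data), CHUNK):
        send_sock.sendto(data[i:i + CHUNK], ADDR)
    send_sock.sendto(b"", ADDR)
    r.join()
    ```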

  19. Improved streaming analysis technique: spherical harmonics expansion of albedo data

    International Nuclear Information System (INIS)

    Albert, T.E.; Simmons, G.L.

    1979-01-01

    An improved albedo scattering technique was implemented with a three-dimensional Monte Carlo transport code for use in analyzing radiation streaming problems. The improvement was based on a shifted spherical harmonics expansion of the doubly differential albedo database. The result of the improvement was a factor of 3 to 10 reduction in data storage requirements and approximately a factor of 3 to 6 increase in computational speed. Comparisons of results obtained using the technique with measurements are shown for neutron streaming in one- and two-legged square concrete ducts.
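
    The data-compression idea can be shown in one angular variable: fit a low-order expansion to tabulated angular albedo data and store only the coefficients. The sketch below uses a 1-D Legendre expansion with made-up data as an illustration of the same trade-off; the report's expansion is a shifted spherical harmonics expansion of doubly differential data.

    ```python
    # Replace a tabulated angular distribution with a handful of expansion
    # coefficients, then evaluate on demand.
    import numpy as np
    from numpy.polynomial import legendre

    mu = np.linspace(-1, 1, 181)               # cos(theta) grid
    albedo = 0.3 + 0.25 * mu + 0.1 * mu**2     # stand-in tabulated data

    coeffs = legendre.legfit(mu, albedo, deg=4)   # 5 numbers replace 181 values
    recon = legendre.legval(mu, coeffs)
    print("max reconstruction error:", np.abs(recon - albedo).max())
    ```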

  20. Dense Array Optimization of Cross-Flow Turbines

    Science.gov (United States)

    Scherl, Isabel; Strom, Benjamin; Brunton, Steven; Polagye, Brian

    2017-11-01

    Cross-flow turbines, where the axis of rotation is perpendicular to the freestream flow, can be used to convert the kinetic energy in wind or water currents to electrical power. By taking advantage of mean and time-resolved wake structures, the optimal density of an array of cross-flow turbines has the potential for higher power output per unit area of land or sea-floor than an equivalent array of axial-flow turbines. In addition, dense arrays in tidal or river channels may be able to further elevate efficiency by exploiting flow confinement and surface proximity. In this work, a two-turbine array is optimized experimentally in a recirculating water channel. The spacing between turbines, as well as individual and coordinated turbine control strategies are optimized. Array efficiency is found to exceed the maximum efficiency for a sparse array (i.e., no interaction between turbines) for stream-wise rotor spacing of less than two diameters. Results are discussed in the context of wake measurements made behind a single rotor.

  1. Diversity of acoustic streaming in a rectangular acoustofluidic field.

    Science.gov (United States)

    Tang, Qiang; Hu, Junhui

    2015-04-01

    Diversity of the acoustic streaming field in a 2D rectangular chamber with a traveling wave, using water as the acoustic medium, is numerically investigated by the finite element method. It is found that the working frequency, the vibration excitation source length, and the distance and phase difference between two separated symmetric vibration excitation sources can cause diversity in the acoustic streaming pattern. It is also found that a small object in the acoustic field results in an additional eddy, and affects the eddy size in the acoustic streaming field. In addition, the computation results show that with an increase of the acoustic medium's temperature, the speed of the main acoustic streaming decreases first and then increases, and the angular velocity of the corner eddies increases monotonically, which can be clearly explained by the change of the acoustic dissipation factor and shearing viscosity of the acoustic medium with temperature. The commercial FEM software COMSOL Multiphysics is used to implement the computation tasks, which makes our method very easy to use, and the computation method is partially verified against an established analytical solution. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Detector array and method

    International Nuclear Information System (INIS)

    Timothy, J.G.; Bybee, R.L.

    1978-01-01

    A detector array and method are described in which sets of electrode elements are provided. Each set consists of a number of linearly extending parallel electrodes. The sets of electrode elements are disposed at an angle (preferably orthogonal) with respect to one another so that the individual elements intersect and overlap individual elements of the other sets. Electrical insulation is provided between the overlapping elements. The detector array is exposed to a source of charged particles which, in accordance with one embodiment, comprise electrons derived from a microchannel array plate exposed to photons. Amplifier and discriminator means are provided for each individual electrode element. Detection means are provided to sense pulses on individual electrode elements in the sets, with coincidence of pulses on individual intersecting electrode elements being indicative of charged particle impact at the intersection of the elements. Electronic readout means provide an indication of coincident events and the location where the charged particle or particles impacted. Display means are provided for generating appropriate displays representative of the intensity and location of charged particles impacting on the detector array.
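
    The readout logic lends itself to a compact illustration: an impact is registered wherever a row-electrode pulse and a column-electrode pulse coincide in time. The Python sketch below shows that coincidence test; the tuple format, timing window, and example pulses are invented for illustration.

        # Sketch of the coincidence logic: a pulse on row electrode i and a
        # simultaneous pulse on column electrode j implies a particle impact
        # at intersection (i, j). Window and pulse format are invented.
        def locate_impacts(row_pulses, col_pulses, window=50e-9):
            """row_pulses/col_pulses: lists of (electrode_index, time) tuples."""
            impacts = []
            for i, t_row in row_pulses:
                for j, t_col in col_pulses:
                    if abs(t_row - t_col) <= window:    # coincidence test
                        impacts.append((i, j))
            return impacts

        # Pulses on row 3 and column 7 arriving 20 ns apart -> impact at (3, 7)
        print(locate_impacts([(3, 1.00e-6)], [(7, 1.02e-6)]))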

  3. Diode lasers and arrays

    International Nuclear Information System (INIS)

    Streifer, W.

    1988-01-01

    This paper discusses the principles of operation of III-V semiconductor diode lasers, the use of distributed feedback, and high power laser arrays. The semiconductor laser is a robust, miniature, versatile device, which directly converts electricity to light with very high efficiency. Applications to pumping solid-state lasers and to fiber optic and point-to-point communications are reviewed

  4. Array Theory and Nial

    DEFF Research Database (Denmark)

    Falster, Peter; Jenkins, Michael

    1999-01-01

    This report is the result of collaboration between the authors during the first 8 months of 1999 when M. Jenkins was visiting professor at DTU. The report documents the development of a tool for the investigation of array theory concepts and in particular presents various approaches to choose...

  5. Piezoelectric transducer array microspeaker

    KAUST Repository

    Carreno, Armando Arpys Arevalo; Conchouso Gonzalez, David; Castro, David; Kosel, Jürgen; Foulds, Ian G.

    2016-01-01

    contains 2n piezoelectric transducer membranes, where “n” is the bit number. Every element of the array has a circular shape structure. The membrane is made out of four layers: 300nm of platinum for the bottom electrode, 250nm of lead zirconate titanate (PZT

  6. Review on Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    P. Sumithra

    2017-03-01

    Full Text Available Computational electromagnetics (CEM) is applied to model the interaction of electromagnetic fields with objects like antennas, waveguides, aircraft and their environment using Maxwell's equations. In this paper the strengths and weaknesses of various computational electromagnetic techniques are discussed. The performance of various techniques in terms of accuracy, memory and computational time for application-specific tasks such as modeling RCS (radar cross section), space applications, thin wires and antenna arrays is presented in this paper.

  7. A Macintosh based data system for array spectrometers (Poster)

    Science.gov (United States)

    Bregman, J.; Moss, N.

    An interactive data acquisition and reduction system has been assembled by combining a Macintosh computer with an instrument controller (an Apple II computer) via an RS-232 interface. The data system provides flexibility for operating different linear array spectrometers. The standard Macintosh interface is used to provide ease of operation and to allow transferring the reduced data to commercial graphics software.

  8. Seismic Background Noise Analysis of BRTR (PS-43) Array

    Science.gov (United States)

    Ezgi Bakir, Mahmure; Meral Ozel, Nurcan; Umut Semin, Korhan

    2015-04-01

    The seismic background noise variation of the BRTR array, composed of two sub-arrays located in Ankara and in Ankara-Keskin, has been investigated by calculating Power Spectral Densities and Probability Density Functions for seasonal and diurnal noise variations between 2005 and 2011. PSDs were computed within the frequency range of 100 s - 10 Hz. The results show little change in noise conditions with time and location. In particular, diurnal noise-level changes were observed at 3-5 Hz at the Keskin array, and there is a 5-7 dB difference between day and night in the cultural noise band (1-10 Hz). Noise levels of the medium-period array are higher at 1-2 Hz than those of the short-period array. Higher noise levels were observed during working hours than at night in the cultural noise band. The seasonal background noise variation at the two sites also shows very similar properties. Since these stations are borehole instruments located away from the coasts, only a small change in noise levels caused by microseisms was observed. Comparison between the Keskin short-period array and the Ankara medium-period array shows that the Keskin array is quieter.
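
    For readers unfamiliar with the underlying computation, the sketch below shows a generic power spectral density estimate of a noise trace using SciPy's Welch estimator, the kind of building block such noise surveys rest on. The sampling rate, record length, and segment size are illustrative assumptions, not BRTR parameters.

        # Generic sketch of the PSD computation underlying such noise studies
        # (Welch's method; all parameters here are invented for illustration).
        import numpy as np
        from scipy.signal import welch

        fs = 40.0                                  # samples per second
        trace = np.random.randn(3600 * int(fs))    # stand-in for one hour of data

        f, psd = welch(trace, fs=fs, nperseg=4096) # averaged periodogram estimate
        psd_db = 10 * np.log10(psd)                # noise levels in dB for plotting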

  9. Process for humidifying a gaseous fuel stream

    International Nuclear Information System (INIS)

    Sederquist, R. A.

    1985-01-01

    A fuel gas stream for a fuel cell is humidified by a recirculating hot liquid water stream using the heat of condensation from the humidified stream as the heat to vaporize the liquid water. Humidification is accomplished by directly contacting the liquid water with the dry gas stream in a saturator to evaporate a small portion of water. The recirculating liquid water is reheated by direct contact with the humidified gas stream in a condenser, wherein water is condensed into the liquid stream. Between the steps of humidifying and condensing water from the gas stream it passes through the fuel cell and additional water, in the form of steam, is added thereto

  10. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  11. Supporting seamless mobility for P2P live streaming.

    Science.gov (United States)

    Kim, Eunsam; Kim, Sangjin; Lee, Choonhwa

    2014-01-01

    With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality caused by data loss during handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. By simulation experiments, we show that a P2P live streaming system with our proposed handover scheme can improve playback continuity significantly compared to one without it.

  12. Supporting Seamless Mobility for P2P Live Streaming

    Directory of Open Access Journals (Sweden)

    Eunsam Kim

    2014-01-01

    Full Text Available With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality caused by data loss during handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. By simulation experiments, we show that a P2P live streaming system with our proposed handover scheme can improve playback continuity significantly compared to one without it.

  13. Theoretical models of Kapton heating in solar array geometries

    Science.gov (United States)

    Morton, Thomas L.

    1992-01-01

    In an effort to understand pyrolysis of Kapton in solar arrays, a computational heat transfer program was developed. This model allows for the different materials and widely divergent length scales of the problem. The present status of the calculation indicates that thin copper traces surrounded by Kapton and carrying large currents can show large temperature increases, but the other configurations seen on solar arrays have adequate heat sinks to prevent substantial heating of the Kapton. Electron currents from the ambient plasma can also contribute to heating of thin traces. Since Kapton is stable at temperatures as high as 600 C, this indicates that it should be suitable for solar array applications. There are indications that the adhesive used in solar arrays may be a strong contributor to the pyrolysis problem seen in solar array vacuum chamber tests.

  14. Fish populations in Plynlimon streams

    Directory of Open Access Journals (Sweden)

    D. T. Crisp

    1997-01-01

    Full Text Available In Plynlimon streams, brown trout (Salmo trutta L.) are widespread in the upper Wye at population densities of 0.03 to 0.32 fish m-2 and show evidence of successful recruitment in most years. In the upper Severn, brown trout are found only in an area of c. 1670 m2 downstream of Blaenhafren Falls at densities of 0.03 to 0.24 fish m-2, and the evidence suggests very variable year-to-year success in recruitment (Crisp & Beaumont, 1996). Analyses of the data show that temperature differences between afforested and unafforested streams may affect the rates of trout incubation and growth but are not likely to influence species survival. Simple analyses of stream discharge data suggest, but do not prove, that good years for recruitment in the Hafren population were years of low stream discharge. This may be linked to groundwater inputs detected in other studies in this stream. More research is needed to explain the survival of the apparently isolated trout population in the Hafren.

  15. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
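
    The finalization-tag idea is simple enough to sketch: a vertex stays resident only from its first appearance until the stream announces its last reference. The toy Python generator below illustrates that working-set behaviour; the record format ('v', 'hex', 'final') is invented, and a real coder of course performs compression rather than just yielding cells.

        # Toy illustration of topological finalization: keep a vertex in
        # memory only until the stream says it is referenced for the last
        # time, so resident state stays a small fraction of the whole mesh.
        def stream_hexahedra(stream):
            """stream yields ('v', id, xyz), ('hex', ids), or ('final', id)."""
            live = {}                                # vertices currently needed
            for record in stream:
                tag = record[0]
                if tag == 'v':
                    live[record[1]] = record[2]      # vertex enters working set
                elif tag == 'hex':
                    cell = [live[i] for i in record[1]]  # all 8 corners resident
                    yield cell                       # hand the cell to the coder
                elif tag == 'final':
                    del live[record[1]]              # never referenced again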

  16. Concurrent array-based queue

    Science.gov (United States)

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
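
    A minimal sketch of such a metadata-driven queue, assuming the four fields named above (array start location, array size, queue top, and queue bottom) and a flat memory array; the Python class and its field names are illustrative, not the patented embodiment.

        # Minimal sketch of an array-based queue described entirely by its
        # metadata (start, size, top, bottom) inside a flat memory array.
        class ArrayQueue:
            def __init__(self, memory, start, size):
                self.mem, self.start, self.size = memory, start, size
                self.top = 0       # index of next slot to dequeue
                self.bottom = 0    # index of next free slot to enqueue
                self.count = 0

            def enqueue(self, value):
                if self.count == self.size:
                    raise OverflowError("queue full")
                self.mem[self.start + self.bottom] = value
                self.bottom = (self.bottom + 1) % self.size   # wrap around
                self.count += 1

            def dequeue(self):
                if self.count == 0:
                    raise IndexError("queue empty")
                value = self.mem[self.start + self.top]
                self.top = (self.top + 1) % self.size
                self.count -= 1
                return value

        memory = [None] * 64
        q = ArrayQueue(memory, start=16, size=8)   # queue occupies slots 16..23
        q.enqueue("req-0"); q.enqueue("req-1")
        assert q.dequeue() == "req-0"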

  17. DOA Estimation of Cylindrical Conformal Array Based on Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Minjie Wu

    2016-01-01

    Full Text Available Due to the variable curvature of the conformal carrier, the pattern of each element has a different direction. The traditional method of analyzing a conformal array is to use the Euler rotation angle and its matrix representation. However, it is computationally demanding, especially for irregular array structures. In this paper, we present a novel algorithm combining geometric algebra with Multiple Signal Classification (MUSIC), termed GA-MUSIC, to solve the direction of arrival (DOA) for a cylindrical conformal array. On this basis, we derive the pattern and array manifold. Compared with the existing algorithms, our proposed one avoids the cumbersome matrix transformations and largely decreases the computational complexity. The simulation results verify the effectiveness of the proposed method.
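
    For orientation, the subspace step that GA-MUSIC builds on is standard MUSIC: estimate the covariance of the array snapshots, split its eigenvectors into signal and noise subspaces, and scan for steering vectors orthogonal to the noise subspace. The sketch below implements that baseline for a plain uniform linear array, not the geometric-algebra conformal variant; element count, spacing, source angles, and noise level are all illustrative.

        # Baseline MUSIC for a uniform linear array (illustrative parameters).
        import numpy as np

        np.random.seed(0)
        M = 8                                  # array elements
        d = 0.5                                # spacing in wavelengths
        angles = np.deg2rad([20.0, -40.0])     # true source directions
        N = 200                                # snapshots

        def steering(theta):
            # Plane-wave response of the M-element uniform linear array
            return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

        A = np.column_stack([steering(t) for t in angles])
        S = np.random.randn(2, N) + 1j * np.random.randn(2, N)   # source signals
        noise = 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
        X = A @ S + noise                                        # array snapshots

        R = X @ X.conj().T / N                 # sample covariance matrix
        w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
        En = V[:, :M - 2]                      # noise subspace (M - #sources vectors)

        # MUSIC pseudospectrum: peaks where steering vectors are ~orthogonal to En
        grid = np.deg2rad(np.linspace(-90, 90, 721))
        P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                      for t in grid])
        # a full implementation would search P for its local maxima here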

  18. EzArray: A web-based highly automated Affymetrix expression array data management and analysis system

    Directory of Open Access Journals (Sweden)

    Zhu Yuelin

    2008-01-01

    Full Text Available Abstract Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data are still challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics, statistics and computer skills for usage. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and get data analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited pre-knowledge of microarray data analysis can complete initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with Gene Expression Omnibus (GEO and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from http://www.ezarray.com/.

  19. Low-flow characteristics of Virginia streams

    Science.gov (United States)

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. Monthly
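
    The MOVE.1 fit mentioned above is the line of organic correlation: its slope is the ratio of the standard deviations of the two concurrent records, signed by their correlation, so the extended record preserves variance rather than minimizing squared error. A minimal sketch, with invented flow values standing in for real index-site and short-record data:

        # Sketch of the line of organic correlation fitted by MOVE.1 between
        # a short-record site (y) and a long-record index site (x).
        import numpy as np

        def move1(x, y):
            """Return (slope, intercept) of the LOC from concurrent records."""
            r = np.corrcoef(x, y)[0, 1]
            slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
            intercept = np.mean(y) - slope * np.mean(x)
            return slope, intercept

        # Illustrative data only:
        x = np.array([1.2, 3.4, 0.8, 2.1, 5.0])   # index-site low flows
        y = np.array([0.6, 1.9, 0.4, 1.2, 2.7])   # concurrent short-record flows
        m, b = move1(x, y)
        estimated = m * 4.2 + b                    # estimate for an unobserved year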

  20. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation stud

  1. The Big Optical Array

    International Nuclear Information System (INIS)

    Mozurkewich, D.; Johnston, K.J.; Simon, R.S.

    1990-01-01

    This paper describes the design and the capabilities of the Naval Research Laboratory Big Optical Array (BOA), an interferometric optical array for high-resolution imaging of stars, stellar systems, and other celestial objects. There are four important differences between the BOA design and the design of Mark III Optical Interferometer on Mount Wilson (California). These include a long passive delay line which will be used in BOA to do most of the delay compensation, so that the fast delay line will have a very short travel; the beam combination in BOA will be done in triplets, to allow measurement of closure phase; the same light will be used for both star and fringe tracking; and the fringe tracker will use several wavelength channels

  2. A 4 probe array

    Energy Technology Data Exchange (ETDEWEB)

    Fernando, C E [CEGB, Marchwood Engineering Laboratories, Marchwood, Southampton, Hampshire (United Kingdom)

    1980-11-01

    An NDT system is described which moves away from the present manual method using a single send/receive transducer combination and uses instead an array of four transducers. Four transducers are shown to be sufficient to define a point reflector with a resolution of mλz/R, where mλ is the minimum detectable path difference in the system (corresponding to an m-cycle time resolution), z the range and R the radius of the array. Signal averaging with an input ADC rate of 100 MHz is used, with voice output for the range data. Typical resolution measurements in a water tank are presented. We expect a resolution of the order of mm in steel at a range of 80 mm. The system is expected to have applications in automated, high-resolution sizing of defects and in the inspection of austenitic stainless steel welds. (author)

  3. Timed arrays wideband and time varying antenna arrays

    CERN Document Server

    Haupt, Randy L

    2015-01-01

    Introduces timed arrays and design approaches to meet the new high performance standards The author concentrates on any aspect of an antenna array that must be viewed from a time perspective. The first chapters briefly introduce antenna arrays and explain the difference between phased and timed arrays. Since timed arrays are designed for realistic time-varying signals and scenarios, the book also reviews wideband signals, baseband and passband RF signals, polarization and signal bandwidth. Other topics covered include time domain, mutual coupling, wideband elements, and dispersion. The auth

  4. Solar collector array

    Science.gov (United States)

    Hall, John Champlin; Martins, Guy Lawrence

    2015-09-06

    A method and apparatus for efficient manufacture, assembly and production of solar energy. In one aspect, the apparatus may include a number of modular solar receiver assemblies that may be separately manufactured, assembled and individually inserted into a solar collector array housing shaped to receive a plurality of solar receivers. The housing may include optical elements for focusing light onto the individual receivers, and a circuit for electrically connecting the solar receivers.

  5. Photovoltaic cell array

    Science.gov (United States)

    Eliason, J. T. (Inventor)

    1976-01-01

    A photovoltaic cell array consisting of parallel columns of silicon filaments is described. Each fiber is doped to produce an inner region of one polarity type and an outer region of the opposite polarity type, thereby forming a continuous radial semiconductor junction. Spaced rows of electrical contacts alternately connect to the inner and outer regions to provide a plurality of electrical outputs which may be combined in parallel or in series.

  6. Phased array antenna control

    Science.gov (United States)

    Doland, G. D. (Inventor)

    1978-01-01

    Several new and useful improvements in the steering and control of phased array antennas having a small number of elements, typically on the order of 5 to 17, are provided. Among the improvements are increasing the number of beam steering positions, reducing the possibility of phase transients in signals received or transmitted with the antennas, and increasing control and testing capacity with respect to the antennas.

  7. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above mb 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  8. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2010-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using the SCALA digital signage software system. The system is robust and flexible, allowing for the usage of scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, and inter- and intra-screen divisibility. The video is made available to the collaboration or public through the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video t...

  9. Stream ciphers and number theory

    CERN Document Server

    Cusick, Thomas W; Renvall, Ari R

    2004-01-01

    This is the unique book on cross-fertilisations between stream ciphers and number theory. It systematically and comprehensively covers known connections between the two areas that are available only in research papers. Some parts of this book consist of new research results that are not available elsewhere. In addition to exercises, over thirty research problems are presented in this book. In this revised edition almost every chapter was updated, and some chapters were completely rewritten. It is useful as a textbook for a graduate course on the subject, as well as a reference book for researchers in related fields. · Unique book on interactions of stream ciphers and number theory. · Research monograph with many results not available elsewhere. · A revised edition with the most recent advances in this subject. · Over thirty research problems for stimulating interactions between the two areas. · Written by leading researchers in stream ciphers and number theory.

  10. Fingerprint multicast in secure video streaming.

    Science.gov (United States)

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amount of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.

  11. Depth Images Filtering In Distributed Streaming

    Directory of Open Access Journals (Sweden)

    Dziubich Tomasz

    2016-04-01

    Full Text Available In this paper, we propose a distributed system for point cloud processing and for transferring point clouds via a computer network with regard to effectiveness-related requirements. We discuss a comparison of point cloud filters focusing on their usage for streaming optimization. For the filtering step of the stream pipeline processing we evaluate four filters: Voxel Grid, Radius Outlier Removal, Statistical Outlier Removal and Pass Through. For each of the filters we perform a series of tests evaluating the impact on the point cloud size and transmission frequency (analysed for various fps ratios). We present results of the optimization process used for point cloud consolidation in a distributed environment. We describe the processing of the point clouds before and after transmission. Pre- and post-processing allow the user to send the cloud via the network without delays. The proposed pre-processing compression of the cloud and post-processing reconstruction of it are focused on assuring that the end-user application obtains the cloud with a given precision.
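
    Of the four filters, Voxel Grid is the easiest to sketch: snap each point to a 3-D grid cell and keep one centroid per occupied cell, so the cloud shrinks before it is sent over the network. The NumPy sketch below illustrates the idea; it is not the evaluated library implementation, and the leaf size and input cloud are invented.

        # NumPy sketch of the Voxel Grid idea: one centroid per occupied voxel.
        import numpy as np

        def voxel_grid(points, leaf=0.05):
            """points: (N, 3) array -> downsampled (M, 3) array of centroids."""
            keys = np.floor(points / leaf).astype(np.int64)   # voxel index per point
            _, inverse = np.unique(keys, axis=0, return_inverse=True)
            counts = np.bincount(inverse)
            out = np.zeros((inverse.max() + 1, 3))
            for dim in range(3):                              # centroid per voxel
                out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
            return out

        cloud = np.random.rand(100_000, 3)        # stand-in depth-camera cloud
        small = voxel_grid(cloud, leaf=0.02)      # typically far fewer points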

  12. Lectin-Array Blotting.

    Science.gov (United States)

    Pazos, Raquel; Echevarria, Juan; Hernandez, Alvaro; Reichardt, Niels-Christian

    2017-09-01

    Aberrant protein glycosylation is a hallmark of cancer, infectious diseases, and autoimmune or neurodegenerative disorders. Unlocking the potential of glycans as disease markers will require rapid and unbiased glycoproteomics methods for glycan biomarker discovery. The present method is a facile and rapid protocol for qualitative analysis of protein glycosylation in complex biological mixtures. While traditional lectin arrays only provide an average signal for the glycans in the mixture, which is usually dominated by the most abundant proteins, our method provides individual lectin binding profiles for all proteins separated in the gel electrophoresis step. Proteins do not have to be excised from the gel for subsequent analysis via the lectin array but are transferred by contact diffusion from the gel to a glass slide presenting multiple copies of printed lectin arrays. Fluorescently marked glycoproteins are trapped by the printed lectins via specific carbohydrate-lectin interactions and after a washing step their binding profile with up to 20 lectin probes is analyzed with a fluorescent scanner. The method produces the equivalent of 20 lectin blots in a single experiment, giving detailed insight into the binding epitopes present in the fractionated proteins. Copyright © 2017 John Wiley & Sons, Inc.

  13. Coulomb gap triptych in a periodic array of metal nanocrystals.

    Science.gov (United States)

    Chen, Tianran; Skinner, Brian; Shklovskii, B I

    2012-09-21

    The Coulomb gap in the single-particle density of states (DOS) is a universal consequence of electron-electron interaction in disordered systems with localized electron states. Here we show that in arrays of monodisperse metallic nanocrystals, there is not one but three identical adjacent Coulomb gaps, which together form a structure that we call a "Coulomb gap triptych." We calculate the DOS and the conductivity in two- and three-dimensional arrays using a computer simulation. Unlike in the conventional Coulomb glass models, in nanocrystal arrays the DOS has a fixed width in the limit of large disorder. The Coulomb gap triptych can be studied via tunneling experiments.

  14. Temperature of the Gulf Stream

    Science.gov (United States)

    2002-01-01

    The Gulf Stream is one of the strong ocean currents that carries warm water from the sunny tropics to higher latitudes. The current stretches from the Gulf of Mexico up the East Coast of the United States, departs from North America south of the Chesapeake Bay, and heads across the Atlantic to the British Isles. The water within the Gulf Stream moves at the stately pace of 4 miles per hour. Even though the current cools as the water travels thousands of miles, it remains strong enough to moderate the Northern European climate. The image above was derived from the infrared measurements of the Moderate-resolution Imaging Spectroradiometer (MODIS) on a nearly cloud-free day over the east coast of the United States. The coldest waters are shown as purple, with blue, green, yellow, and red representing progressively warmer water. Temperatures range from about 7 to 22 degrees Celsius. The core of the Gulf Stream is very apparent as the warmest water, dark red. It departs from the coast at Cape Hatteras, North Carolina. The cool, shelf water from the north entrains the warmer outflows from the Chesapeake and Delaware Bays. The north wall of the Gulf Stream reveals very complex structure associated with frontal instabilities that lead to exchanges between the Gulf Stream and inshore waters. Several clockwise-rotating warm core eddies are evident north of the core of the Gulf Stream, which enhance the exchange of heat and water between the coastal and deep ocean. Cold core eddies, which rotate counter clockwise, are seen south of the Gulf Stream. The one closest to Cape Hatteras is entraining very warm Gulf Stream waters on its northwest circumference. Near the coast, shallower waters have warmed due to solar heating, while the deeper waters offshore are markedly cooler (dark blue). MODIS made this observation on May 8, 2000, at 11:45 a.m. EDT. For more information, see the MODIS-Ocean web page. The sea surface temperature image was created at the University of Miami using

  15. Measuring nutrient spiralling in streams

    Energy Technology Data Exchange (ETDEWEB)

    Newbold, J D; Elwood, J W; O'Neill, R V; Van Winkle, W

    1981-01-01

    Nutrient cycling in streams involves some downstream transport before the cycle is completed. Thus, the path traveled by a nutrient atom in passing through the cycle can be visualized as a spiral. As an index of the spiralling process, we introduce spiralling length, defined as the average distance associated with one complete cycle of a nutrient atom. This index provides a measure of the utilization of nutrients relative to the available supply from upstream. Using 32P as a tracer, we estimated a spiralling length of 193 m for phosphorus in a small woodland stream.

  16. Stream-processing pipelines: processing of streams on multiprocessor architecture

    NARCIS (Netherlands)

    Kavaldjiev, N.K.; Smit, Gerardus Johannes Maria; Jansen, P.G.

    In this paper we study the timing aspects of the operation of stream-processing applications that run on a multiprocessor architecture. Dependencies are derived for the processing and communication times of the processors in such a system. Three cases of real-time constrained operation and four

  17. CAMS: OLAPing Multidimensional Data Streams Efficiently

    Science.gov (United States)

    Cuzzocrea, Alfredo

    In the context of data stream research, taming the multidimensionality of real-life data streams in order to efficiently support OLAP analysis/mining tasks is a critical challenge. Inspired by this fundamental motivation, in this paper we introduce CAMS (Cube-based Acquisition model for Multidimensional Streams), a model for efficiently OLAPing multidimensional data streams. CAMS combines a set of data stream processing methodologies, namely (i) the OLAP dimension flattening process, which allows us to obtain dimensionality reduction of multidimensional data streams, and (ii) the OLAP stream aggregation scheme, which aggregates data stream readings according to an OLAP-hierarchy-based membership approach. We complete our analytical contribution by means of experimental assessment and analysis of both the efficiency and the scalability of the OLAPing capabilities of CAMS on synthetic multidimensional data streams. Both analytical and experimental results clearly connote CAMS as an enabling component for next-generation Data Stream Management Systems.

  18. Networked Rectenna Array for Smart Material Actuators

    Science.gov (United States)

    Choi, Sang H.; Golembiewski, Walter T.; Song, Kyo D.

    2000-01-01

    The concept of microwave-driven smart material actuators is envisioned as the best option to alleviate the complexity associated with hard-wired control circuitry. Networked rectenna patch array receives and converts microwave power into a DC power for an array of smart actuators. To use microwave power effectively, the concept of a power allocation and distribution (PAD) circuit is adopted for networking a rectenna/actuator patch array. The PAD circuit is imbedded into a single embodiment of rectenna and actuator array. The thin-film microcircuit embodiment of PAD circuit adds insignificant amount of rigidity to membrane flexibility. Preliminary design and fabrication of PAD circuitry that consists of a few nodal elements were made for laboratory testing. The networked actuators were tested to correlate the network coupling effect, power allocation and distribution, and response time. The features of preliminary design are 16-channel computer control of actuators by a PCI board and the compensator for a power failure or leakage of one or more rectennas.

  19. Romanian earthquakes analysis using BURAR seismic array

    International Nuclear Information System (INIS)

    Borleanu, Felix; Rogozea, Maria; Nica, Daniela; Popescu, Emilia; Popa, Mihaela; Radulian, Mircea

    2008-01-01

    Bucovina seismic array (BURAR) is a medium-aperture array, installed in 2002 in the northern part of Romania (47.6148° N latitude, 25.2168° E longitude, 1150 m altitude), as a result of the cooperation between the Air Force Technical Applications Center, USA, and the National Institute for Earth Physics, Romania. The array consists of ten elements, located in boreholes and distributed over a 5 x 5 km2 area; nine with short-period vertical sensors and one with a broadband three-component sensor. Since the new station has been operating, the earthquake survey of Romania's territory has significantly improved. Data recorded by BURAR during the 01.01.2005 - 12.31.2005 time interval are first processed and analyzed in order to establish the array's detection capability for local earthquakes occurring in different Romanian seismic zones. Subsequently, a spectral ratios technique is applied in order to determine calibration relationships for magnitude, using only the information gathered by the BURAR station. The spectral ratios are computed relative to a reference event, considered representative for each seismic zone. This method has the advantage of eliminating path effects. The new calibration procedure is tested for the case of Vrancea intermediate-depth earthquakes and proved to be very efficient in constraining the size of these earthquakes. (authors)

  20. Microprocessor system to recover data from a self-scanning photodiode array

    International Nuclear Information System (INIS)

    Koppel, L.N.; Gadd, T.J.

    1975-01-01

    A microprocessor system developed at Lawrence Livermore Laboratory has expedited the recovery of data describing the low energy x-ray spectra radiated by laser-fusion targets. An Intel microprocessor controls the digitization and scanning of the data stream of an x-ray-sensitive self-scanning photodiode array incorporated in a crystal diffraction spectrometer

  1. Salamander occupancy in headwater stream networks

    Science.gov (United States)

    Grant, E.H.C.; Green, L.E.; Lowe, W.H.

    2009-01-01

    1. Stream ecosystems exhibit a highly consistent dendritic geometry in which linear habitat units intersect to create a hierarchical network of connected branches. 2. Ecological and life history traits of species living in streams, such as the potential for overland movement, may interact with this architecture to shape patterns of occupancy and response to disturbance. Specifically, large-scale habitat alteration that fragments stream networks and reduces connectivity may reduce the probability a stream is occupied by sensitive species, such as stream salamanders. 3. We collected habitat occupancy data on four species of stream salamanders in first-order (i.e. headwater) streams in undeveloped and urbanised regions of the eastern U.S.A. We then used an information-theoretic approach to test alternative models of salamander occupancy based on a priori predictions of the effects of network configuration, region and salamander life history. 4. Across all four species, we found that streams connected to other first-order streams had higher occupancy than those flowing directly into larger streams and rivers. For three of the four species, occupancy was lower in the urbanised region than in the undeveloped region. 5. These results demonstrate that the spatial configuration of stream networks within protected areas affects the occurrences of stream salamander species. We strongly encourage preservation of network connections between first-order streams in conservation planning and management decisions that may affect stream species.

  2. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  3. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  4. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

    Full Text Available Small aperture microphone arrays provide many advantages for portable devices and hearing aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small aperture arrays. The effects of array aperture on localization are analyzed using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error into the proposed method. The proposed method for a 5 mm array aperture is validated by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with a precision as high as 6 degrees, even in the presence of wind noise and other noises. Furthermore, the proposed method reduces the computational complexity compared with other methods.

  5. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    Science.gov (United States)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes generate repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
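
    The O(n³) cost the paper attacks comes from solving the ordinary Kriging system anew for every estimate. The NumPy sketch below shows that baseline (augmented variogram system, weights, weighted average); the exponential variogram, its parameters, and the sensor data are illustrative assumptions, and the incremental/recursive update strategies themselves are not reproduced.

        # Baseline ordinary Kriging that the incremental/recursive strategies
        # accelerate (variogram model and data are invented for illustration).
        import numpy as np

        np.random.seed(0)

        def variogram(h, sill=1.0, rng=10.0):
            # Exponential semivariogram model (assumed for this sketch)
            return sill * (1.0 - np.exp(-h / rng))

        def ordinary_kriging(xy, z, target):
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = variogram(d)           # pairwise semivariances
            A[n, n] = 0.0                      # Lagrange-multiplier corner
            b = np.ones(n + 1)
            b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
            w = np.linalg.solve(A, b)          # O(n^3): the cost the paper attacks
            return w[:n] @ z                   # weighted average of observations

        xy = np.random.rand(50, 2) * 100       # sensor locations
        z = np.sin(xy[:, 0] / 20) + 0.1 * np.random.randn(50)
        print(ordinary_kriging(xy, z, np.array([50.0, 50.0])))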

  6. Analysis of streaming media systems

    NARCIS (Netherlands)

    Lu, Y.

    2010-01-01

    Multimedia services have been popping up at tremendous speed in recent years. A large number of these multimedia streaming systems are introduced to the consumer market. Internet Service Providers, Telecommunications Operators, Service/Content Providers, and end users are interested in the

  7. Stretch-minimising stream surfaces

    KAUST Repository

    Barton, Michael; Kosinka, Jin; Calo, Victor M.

    2015-01-01

    We study the problem of finding stretch-minimising stream surfaces in a divergence-free vector field. These surfaces are generated by motions of seed curves that propagate through the field in a stretch-minimising manner, i.e., they move without stretching or shrinking, preserving the length of their arbitrary arc. In general fields, such curves may not exist. However, the divergence-free constraint gives rise to these 'stretch-free' curves that are locally arc-length preserving when infinitesimally propagated. Several families of stretch-free curves are identified and used as initial guesses for stream surface generation. These surfaces are subsequently globally optimised to obtain the best stretch-minimising stream surfaces in a given divergence-free vector field. Our algorithm was tested on benchmark datasets, proving its applicability to incompressible fluid flow simulations, where our stretch-minimising stream surfaces realistically reflect the flow of a flexible univariate object. © 2015 Elsevier Inc. All rights reserved.

  8. ALIENS IN WESTERN STREAM ECOSYSTEMS

    Science.gov (United States)

    The USEPA's Environmental Monitoring and Assessment Program conducted a five year probability sample of permanent mapped streams in 12 western US states. The study design enables us to determine the extent of selected riparian invasive plants, alien aquatic vertebrates, and some ...

  9. Stretch-minimising stream surfaces

    KAUST Repository

    Barton, Michael

    2015-05-01

    We study the problem of finding stretch-minimising stream surfaces in a divergence-free vector field. These surfaces are generated by motions of seed curves that propagate through the field in a stretch-minimising manner, i.e., they move without stretching or shrinking, preserving the length of their arbitrary arc. In general fields, such curves may not exist. However, the divergence-free constraint gives rise to these 'stretch-free' curves that are locally arc-length preserving when infinitesimally propagated. Several families of stretch-free curves are identified and used as initial guesses for stream surface generation. These surfaces are subsequently globally optimised to obtain the best stretch-minimising stream surfaces in a given divergence-free vector field. Our algorithm was tested on benchmark datasets, proving its applicability to incompressible fluid flow simulations, where our stretch-minimising stream surfaces realistically reflect the flow of a flexible univariate object. © 2015 Elsevier Inc. All rights reserved.

  10. Nuclear reactor cavity streaming shield

    International Nuclear Information System (INIS)

    Klotz, R.J.; Stephen, D.W.

    1978-01-01

    The upper portion of a nuclear reactor vessel supported in a concrete reactor cavity has a structure mounted below the top of the vessel, between the outer vessel wall and the reactor cavity wall, which contains hydrogenous material that will attenuate radiation streaming upward between the vessel and the reactor cavity wall while preventing pressure buildup during a loss-of-coolant accident.

  11. Video Streaming in Online Learning

    Science.gov (United States)

    Hartsell, Taralynn; Yuen, Steve Chi-Yin

    2006-01-01

    The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…

  12. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most
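
    The classic single-stream building block for this problem is reservoir sampling, which maintains a uniform without-replacement sample in O(k) space; the article's communication-efficient protocol for k distributed streams is more involved and is not reproduced here. A minimal sketch:

        # Classic reservoir sampling: a uniform size-k sample of a stream
        # of unknown length, using O(k) space (single-stream case only).
        import random

        def reservoir_sample(stream, k):
            """Maintain a uniform without-replacement sample of size k."""
            sample = []
            for i, item in enumerate(stream):
                if i < k:
                    sample.append(item)            # fill the reservoir first
                else:
                    j = random.randrange(i + 1)    # item kept with prob k/(i+1)
                    if j < k:
                        sample[j] = item
            return sample

        print(reservoir_sample(range(10_000), k=5))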

  13. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the crosscorrelation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used in order to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  14. Aeroacoustics of Three-Stream Jets

    Science.gov (United States)

    Henderson, Brenda S.

    2012-01-01

    Results from acoustic measurements of noise radiated from a heated, three-stream, co-annular exhaust system operated at subsonic conditions are presented. The experiments were conducted for a range of core, bypass, and tertiary stream temperatures and pressures. The nozzle system had a fan-to-core area ratio of 2.92 and a tertiary-to-core area ratio of 0.96. The impact of introducing a third stream on the radiated noise for third-stream velocities below that of the bypass stream was to reduce high frequency noise levels at broadside and peak jet-noise angles. Mid-frequency noise radiation at aft observation angles was impacted by the conditions of the third stream. The core velocity had the greatest impact on peak noise levels and the bypass-to-core mass flow ratio had a slight impact on levels in the peak jet-noise direction. The third-stream jet conditions had no impact on peak noise levels. Introduction of a third jet stream in the presence of a simulated forward-flight stream limits the impact of the third stream on radiated noise. For equivalent ideal thrust conditions, two-stream and three-stream jets can produce similar acoustic spectra although high-frequency noise levels tend to be lower for the three-stream jet.

  15. Streaming Visual Analytics Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Kristin A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Burtner, Edwin R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kritzstein, Brian P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brisbois, Brooke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mitson, Anna E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-31

    How can we best enable users to understand complex emerging events and make appropriate assessments from streaming data? This was the central question addressed at a three-day workshop on streaming visual analytics. This workshop was organized by Pacific Northwest National Laboratory for a government sponsor. It brought together forty researchers and subject matter experts from government, industry, and academia. This report summarizes the outcomes from that workshop. It describes elements of the vision for a streaming visual analytics environment and a set of important research directions needed to achieve this vision. Streaming data analysis is in many ways the analysis and understanding of change. However, current visual analytics systems usually focus on static data collections, meaning that dynamically changing conditions are not appropriately addressed. The envisioned mixed-initiative streaming visual analytics environment creates a collaboration between the analyst and the system to support the analysis process. It raises the level of discourse from low-level data records to higher-level concepts. The system supports the analyst’s rapid orientation and reorientation as situations change. It provides an environment to support the analyst’s critical thinking. It infers tasks and interests based on the analyst’s interactions. The system works as both an assistant and a devil’s advocate, finding relevant data and alerts as well as considering alternative hypotheses. Finally, the system supports sharing of findings with others. Making such an environment a reality requires research in several areas. The workshop discussions focused on four broad areas: support for critical thinking, visual representation of change, mixed-initiative analysis, and the use of narratives for analysis and communication.

  16. Tidal Turbines’ Layout in a Stream with Asymmetry and Misalignment

    Directory of Open Access Journals (Sweden)

    Nicolas Guillou

    2017-11-01

    Full Text Available A refined assessment of tidal current variability is a prerequisite for successful turbine deployment in the marine environment. However, the numerical evaluation of the tidal kinetic energy resource relies, most of the time, on integrated parameters such as the averaged or maximum stream powers. Predictions from a high-resolution three-dimensional model are exploited here to characterize the asymmetry and misalignment between the flood and ebb tidal currents in the “Raz de Sein”, a strait off western Brittany (France) with strong potential for array development. A series of parameters is considered to assess resource variability and refine the cartography of local potential tidal stream energy sites. The strait is characterized by strong tidal flow divergence, with current asymmetry liable to vary output power by 60% over a tidal cycle. Pronounced misalignments over 20° are furthermore identified in a great part of the energetic locations, and this may reduce the monthly averaged extractable energy by more than 12%. As sea space is limited for turbines, it is finally suggested to aggregate flood- and ebb-dominant stream powers on both sides of the strait to output energy with reduced asymmetry.

  17. Educational Cosmic Ray Arrays

    International Nuclear Information System (INIS)

    Soluk, R. A.

    2006-01-01

    In the last decade a great deal of interest has arisen in using sparse arrays of cosmic ray detectors located at schools as a means of doing both outreach and physics research. This approach has the unique advantage of involving grade school students in an actual ongoing experiment, rather than a simple teaching exercise, while at the same time providing researchers with the basic infrastructure for installation of cosmic ray detectors. A survey is made of projects in North America and Europe, and in particular the ALTA experiment at the University of Alberta, which was the first experiment operating under this paradigm.

  18. Storage array reflection considerations

    International Nuclear Information System (INIS)

    Haire, M.J.; Jordan, W.C.; Taylor, R.G.

    1997-01-01

    The assumptions used for reflection conditions of single containers are fairly well established and consistently applied throughout the industry in nuclear criticality safety evaluations. Containers are usually considered to be either fully water-reflected (i.e. surrounded by 6 to 12 in. of water) for safety calculations or reflected by 1 in. of water for nominal (structural material and air) conditions. Tables and figures are usually available for performing comparative evaluations of containers under various loading conditions. Reflection considerations used for evaluating the safety of storage arrays of fissile material are not as well established.

  19. Asymmetrical floating point array processors, their application to exploration and exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Geriepy, B L

    1983-01-01

    An asymmetrical floating point array processor is a special-purpose scientific computer which operates under asymmetrical control of a host computer. Although an array processor can receive fixed point input and produce fixed point output, its primary mode of operation is floating point. The first generation of array processors was oriented towards time series information. The next generation of array processors has proved much more versatile, and their applicability ranges from petroleum reservoir simulation to speech synthesis. Array processors are becoming commonplace in mining, the primary usage being construction of grids, by usual methods or by kriging. The Australian mining community is among the world's leaders in regard to computer-assisted exploration and exploitation systems. Part of this leadership role must be providing guidance to computer vendors in regard to current and future requirements.

  20. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  1. Dynamic array of dark optical traps

    DEFF Research Database (Denmark)

    Daria, V.R.; Rodrigo, P.J.; Glückstad, J.

    2004-01-01

    A dynamic array of dark optical traps is generated for simultaneous trapping and arbitrary manipulation of multiple low-index microstructures. The dynamic intensity patterns forming the dark optical trap arrays are generated using a nearly loss-less phase-to-intensity conversion of a phase-encoded coherent light source. Two-dimensional input phase distributions corresponding to the trapping patterns are encoded using a computer-programmable spatial light modulator, enabling each trap to be shaped and moved arbitrarily within the plane of observation. We demonstrate the generation of multiple dark optical traps for simultaneous manipulation of hollow "air-filled" glass microspheres suspended in an aqueous medium. (C) 2004 American Institute of Physics.

  2. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  3. Parametric analysis of ATM solar array.

    Science.gov (United States)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.

  4. Parallel-Bit Stream for Securing Iris Recognition

    OpenAIRE

    Elsayed Mostafa; Maher Mansour; Heba Saad

    2012-01-01

    Biometrics-based authentication schemes have usability advantages over traditional password-based authentication schemes. However, biometrics raises several privacy concerns and, compared with a traditional password, a biometric template is neither secret nor revocable. In this paper, we propose a fast method for securing a revocable iris template using parallel-bit-stream watermarking to overcome these problems. Experimental results prove that the proposed method has low computation time ...

  5. A Characterization and Evaluation of Coal Liquefaction Process Streams

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-10-01

    An updated assessment of the physico-chemical analytical methodology applicable to coal-liquefaction product streams and a review of the literature dealing with the modeling of fossil-fuel resid conversion to product oils are presented in this document. In addition, a summary is provided for the University of Delaware program conducted under this contract to develop an empirical test to determine relative resid reactivity and to construct a computer model to describe resid structure and predict reactivity.

  6. A recirculating stream aquarium for ecological studies.

    Science.gov (United States)

    Gordon H. Reeves; Fred H. Everest; Carl E. McLemore

    1983-01-01

    Investigations of the ecological behavior of fishes often require studies in both natural and artificial stream environments. We describe a large, recirculating stream aquarium and its controls, constructed for ecological studies at the Forestry Sciences Laboratory in Corvallis.

  7. Comparison of active and passive stream restoration

    DEFF Research Database (Denmark)

    Kristensen, Esben Astrup; Thodsen, Hans; Dehli, Bjarke

    2013-01-01

    Modification and channelization of streams and rivers have been conducted extensively throughout the world during the past century. Subsequently, much effort has been directed at re-creating the lost habitats and thereby improving living conditions for aquatic organisms. However, as restoration methods are plentiful, it is difficult to determine which one to use to get the anticipated result. The aim of this study was to compare two commonly used methods for improving the physical condition of small Danish streams: re-meandering and passive restoration through cessation of maintenance. Our investigation included measurement of the physical conditions in 29 stream reaches covering four different groups: (1) re-meandered streams, (2) LDC streams (the least disturbed streams available), (3) passively restored streams (>10 years since cessation of maintenance) and (4) channelized and non-restored streams.

  8. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... huge amounts of stream data, as in telecommunication systems. So, there ... streams have many challenges for data mining algorithm design, like the use of ..... A. Bifet and R. Gavalda, "Learning from Time-Changing Data with Adaptive ...

  9. Cytoplasmic Streaming in the Drosophila Oocyte.

    Science.gov (United States)

    Quinlan, Margot E

    2016-10-06

    Objects are commonly moved within the cell by either passive diffusion or active directed transport. A third possibility is advection, in which objects within the cytoplasm are moved with the flow of the cytoplasm. Bulk movement of the cytoplasm, or streaming, as required for advection, is more common in large cells than in small cells. For example, streaming is observed in elongated plant cells and the oocytes of several species. In the Drosophila oocyte, two stages of streaming are observed: relatively slow streaming during mid-oogenesis and streaming that is approximately ten times faster during late oogenesis. These flows are implicated in two processes: polarity establishment and mixing. In this review, I discuss the underlying mechanism of streaming, how slow and fast streaming are differentiated, and what we know about the physiological roles of the two types of streaming.

  10. Stream Tables and Watershed Geomorphology Education.

    Science.gov (United States)

    Lillquist, Karl D.; Kinner, Patricia W.

    2002-01-01

    Reviews copious stream tables and provides a watershed approach to stream table exercises. Results suggest that this approach to learning the concepts of fluvial geomorphology is effective. (Contains 39 references.) (DDR)

  11. Selecting Sums in Arrays

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2008-01-01

    In an array of n numbers each of the $\binom{n}{2}+n$ contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k’th largest sum. For this problem we prove a time bound of Θ(n · max{1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max{1, log(k/n)}) time algorithm that selects a subarray storing the k’th largest sum among all subarrays of length at least l and at most u.
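
    The optimal algorithms are the subject of the paper; as a point of reference, the length-constrained variant can be solved brute-force in O(n² log k) with prefix sums and a heap. The following baseline sketch is ours, not the authors':

```python
import heapq
from itertools import accumulate

def k_largest_subarray_sums(a, k, lo=1, hi=None):
    """Brute-force baseline: k largest sums over subarrays a[i:j]
    with lo <= j - i <= hi, via prefix sums and heapq.nlargest."""
    n = len(a)
    hi = n if hi is None else hi
    prefix = [0] + list(accumulate(a))  # prefix[j] - prefix[i] = sum(a[i:j])
    sums = (prefix[j] - prefix[i]
            for i in range(n)
            for j in range(i + lo, min(i + hi, n) + 1))
    return heapq.nlargest(k, sums)

print(k_largest_subarray_sums([2, -1, 3, -4, 5], k=3))  # [5, 5, 4]
```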

  12. The Atacama Large Millimeter Array (ALMA)

    Science.gov (United States)

    1999-06-01

    will be extremely sensitive to radiation at millimeter and submillimeter wavelengths. The large number of antennas gives a total collecting area of over 7000 square meters, larger than a football field. At the same time, the shape of the surface of each antenna must be extremely precise under all conditions; the overall accuracy over the entire 12-m diameter must be better than 0.025 millimeters (25µm), or one-third of the diameter of a human hair. The combination of large collecting area and high precision results in extremely high sensitivity to faint cosmic signals. The telescope must also be able to resolve the fine details of the objects it detects. In order to do this at millimeter wavelengths the effective diameter of the overall telescope must be very large - about 10 km. As it is impossible to build a single antenna with this diameter, an array of antennas is used instead, with the outermost antennas being 10 km apart. By combining the signals from all antennas together in a large central computer, it is possible to synthesize the effect of a single dish 10 km across. The resulting angular resolution is about 10 milli-arcseconds, less than one-thousandth the angular size of Saturn. Exciting research perspectives: The scientific case for this revolutionary telescope is overwhelming. ALMA will make it possible to witness the formation of the earliest and most distant galaxies. It will also look deep into the dust-obscured regions where stars are born, to examine the details of star and planet formation. But ALMA will go far beyond these main science drivers, and will have a major impact on virtually all areas of astronomy. It will be a millimeter-wave counterpart to the most powerful optical/infrared telescopes such as ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, with the additional advantage of being unhindered by cosmic dust opacity. The first galaxies in the Universe are expected to become rapidly enshrouded in the dust produced by the

  13. Computing for calculus

    CERN Document Server

    Christensen, Mark J

    1981-01-01

    Computing for Calculus focuses on BASIC as the computer language used for solving calculus problems.This book discusses the input statement for numeric variables, advanced intrinsic functions, numerical estimation of limits, and linear approximations and tangents. The elementary estimation of areas, numerical and string arrays, line drawing algorithms, and bisection and secant method are also elaborated. This text likewise covers the implicit functions and differentiation, upper and lower rectangular estimates, Simpson's rule and parabolic approximation, and interpolating polynomials. Other to

  14. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  15. Stream dynamics: An overview for land managers

    Science.gov (United States)

    Burchard H. Heede

    1980-01-01

    Concepts of stream dynamics are demonstrated through discussion of processes and process indicators; theory is included only where helpful to explain concepts. Present knowledge allows only qualitative prediction of stream behavior. However, such predictions show how management actions will affect the stream and its environment.

  16. Energy from streaming current and potential

    NARCIS (Netherlands)

    Olthuis, Wouter; Schippers, Bob; Eijkel, Jan C.T.; van den Berg, Albert

    2005-01-01

    It is investigated how much energy can be delivered by a streaming current source. A streaming current and subsequent streaming potential originate when double layer charge is transported by hydrodynamic flow. Theory and a network model of such a source are presented, along with initial experimental results.

  17. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  18. Olefin Recovery from Chemical Industry Waste Streams

    Energy Technology Data Exchange (ETDEWEB)

    A.R. Da Costa; R. Daniels; A. Jariwala; Z. He; A. Morisato; I. Pinnau; J.G. Wijmans

    2003-11-21

    The objective of this project was to develop a membrane process to separate olefins from paraffins in waste gas streams as an alternative to flaring or distillation. Flaring these streams wastes their chemical feedstock value; distillation is energy and capital cost intensive, particularly for small waste streams.

  19. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable to operate in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  20. Development of a cross-section based stream package for MODFLOW

    Science.gov (United States)

    Ou, G.; Chen, X.; Irmak, A.

    2012-12-01

    Accurate simulation of stream-aquifer interactions for wide rivers using the streamflow routing package in MODFLOW is very challenging. To better represent a wide river spanning multiple model grid cells, a Cross-Section based streamflow Routing (CSR) package has been developed and incorporated into MODFLOW to simulate the interaction between streams and aquifers. In the CSR package, a stream segment is represented as a four-point polygon instead of the polyline traditionally used in streamflow routing simulation. Each stream segment is composed of an upstream and a downstream cross-section. A cross-section consists of a number of streambed points possessing coordinates, streambed thicknesses and streambed hydraulic conductivities to describe the streambed geometry and hydraulic properties. The left and right end points are used to determine the locations of the stream segments. According to the cross-section geometry and hydraulic properties, CSR calculates the new stream stage at the cross-section using Brent's method to solve Manning's equation. A module automatically computes the area of the stream segment polygon on each intersected MODFLOW grid cell as the upstream and downstream stages change. The stream stage and streambed hydraulic properties of model grids are interpolated from the streambed points. Streambed leakage is computed as a function of streambed conductance and the difference between the groundwater level and the stream stage. The Muskingum-Cunge flow routing scheme with variable parameters is used to simulate streamflow, with groundwater discharge or recharge contributing as lateral flow. An example illustrates the capabilities of the CSR package. The result shows that CSR is applicable to describing the spatial and temporal variation in the interaction between streams and aquifers. Preparing the input data is simple because the program automatically interpolates the cross-section data to each
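
    As a minimal stand-in for the stage computation described above, the following sketch inverts Manning's equation for depth with Brent's method, using a plain rectangular cross-section and made-up channel values (the actual package works on surveyed eight-point sections):

```python
from scipy.optimize import brentq

def manning_discharge(y, width, slope, n_manning):
    """Discharge (m^3/s) from Manning's equation for a rectangular channel."""
    area = width * y
    wetted_perimeter = width + 2.0 * y
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n_manning) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def stage_for_discharge(q, width=20.0, slope=1e-3, n_manning=0.035):
    """Invert Manning's equation for depth y with Brent's root finder."""
    f = lambda y: manning_discharge(y, width, slope, n_manning) - q
    return brentq(f, 1e-6, 50.0)  # bracket: essentially dry to very deep

print(stage_for_discharge(30.0))  # depth (m) carrying 30 m^3/s
```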

  1. The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis

    Science.gov (United States)

    Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; for the CTA Consortium and the DIRAC Consortium

    2017-10-01

    The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, therefore resulting in 4 PB of raw data per year and a total of 27 PB/year, including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes in the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidate to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites have been selected, we have started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may be finally composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of the telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundreds of telescope positions have been simulated, while for future instrument response function simulations, only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolutions of the CTA production system for the off-line data processing during CTA operations and

  2. Academic Self-Concepts in Ability Streams: Considering Domain Specificity and Same-Stream Peers

    Science.gov (United States)

    Liem, Gregory Arief D.; McInerney, Dennis M.; Yeung, Alexander S.

    2015-01-01

    The study examined the relations between academic achievement and self-concepts in a sample of 1,067 seventh-grade students from 3 core ability streams in Singapore secondary education. Although between-stream differences in achievement were large, between-stream differences in academic self-concepts were negligible. Within each stream, levels of…

  3. The long term response of stream flow to climatic warming in headwater streams of interior Alaska

    Science.gov (United States)

    Jeremy B. Jones; Amanda J. Rinehart

    2010-01-01

    Warming in the boreal forest of interior Alaska will have fundamental impacts on stream ecosystems through changes in stream hydrology resulting from upslope loss of permafrost, alteration of availability of soil moisture, and the distribution of vegetation. We examined stream flow in three headwater streams of the Caribou-Poker Creeks Research Watershed (CPCRW) in...

  4. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Full Text Available Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
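
    The defining property is easy to check by enumeration: every selection of t columns must exhibit all v^t symbol tuples. A small checker, with an illustrative strength-2 binary example:

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    """Check that every v^t tuple appears in every choice of t columns."""
    n_cols = len(rows[0])
    for cols in combinations(range(n_cols), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:
            return False
    return True

# A strength-2 covering array on 3 binary columns (also an orthogonal array).
ca = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
print(is_covering_array(ca, t=2, v=2))  # True
```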

  5. The significance of small streams

    Science.gov (United States)

    Wohl, Ellen

    2017-09-01

    Headwaters, defined here as first- and second-order streams, make up 70%‒80% of the total channel length of river networks. These small streams exert a critical influence on downstream portions of the river network by: retaining or transmitting sediment and nutrients; providing habitat and refuge for diverse aquatic and riparian organisms; creating migration corridors; and governing connectivity at the watershed scale. The upstream-most extent of the channel network and the longitudinal continuity and lateral extent of headwaters can be difficult to delineate, however, and people are less likely to recognize the importance of headwaters relative to other portions of a river network. Consequently, headwaters commonly lack the legal protections accorded to other portions of a river network and are more likely to be significantly altered or completely obliterated by land use.

  6. Nanoelectrode array for electrochemical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yelton, William G [Sandia Park, NM; Siegal, Michael P [Albuquerque, NM

    2009-12-01

    A nanoelectrode array comprises a plurality of nanoelectrodes wherein the geometric dimensions of the electrode controls the electrochemical response, and the current density is independent of time. By combining a massive array of nanoelectrodes in parallel, the current signal can be amplified while still retaining the beneficial geometric advantages of nanoelectrodes. Such nanoelectrode arrays can be used in a sensor system for rapid, non-contaminating field analysis. For example, an array of suitably functionalized nanoelectrodes can be incorporated into a small, integrated sensor system that can identify many species rapidly and simultaneously under field conditions in high-resistivity water, without the need for chemical addition to increase conductivity.

  7. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
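
    As a flavor of how a regular iterative algorithm maps onto such an array, the following is a sequential simulation of a classic linear systolic sorter, in which each cell keeps the smaller of its held and incoming values and forwards the larger. This toy sketch is ours, not taken from the paper.

```python
def systolic_sort(stream):
    """Sequentially simulate a linear systolic sorting array.

    Each cell holds one value: when a new value arrives, the cell keeps
    the smaller of (held, incoming) and forwards the larger to the next
    cell.  After the stream drains, cell i holds the i-th smallest item.
    """
    cells = []  # cells[i] = value currently held by cell i
    for incoming in stream:
        for i, held in enumerate(cells):
            if incoming < held:
                cells[i], incoming = incoming, held  # keep smaller, forward larger
        cells.append(incoming)  # the largest value reaches a fresh cell
    return cells

print(systolic_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```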

  8. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.

  9. Detectability of weakly interacting massive particles in the Sagittarius dwarf tidal stream

    International Nuclear Information System (INIS)

    Freese, Katherine; Gondolo, Paolo; Newberg, Heidi Jo

    2005-01-01

    Tidal streams of the Sagittarius dwarf spheroidal galaxy (Sgr) may be showering dark matter onto the solar system and contributing ∼(0.3-23)% of the local density of our galactic halo. If the Sagittarius galaxy contains dark matter in the form of weakly interacting massive particles (WIMPs), the extra contribution from the stream gives rise to a steplike feature in the energy recoil spectrum in direct dark matter detection. For our best estimate of stream velocity (300 km/s) and direction (the plane containing the Sgr dwarf and its debris), the count rate is maximum on June 28 and minimum on December 27 (for most recoil energies), and the location of the step oscillates yearly with a phase opposite to that of the count rate. In the CDMS experiment, for 60 GeV WIMPs, the location of the step oscillates between 35 and 42 keV, and for the most favorable stream density, the stream should be detectable at the 11σ level in four years of data with 10 keV energy bins. Planned large detectors like XENON, CryoArray, and the directional detector DRIFT may also be able to identify the Sgr stream

  10. Research and implementation on improving I/O performance of streaming media storage system

    Science.gov (United States)

    Lu, Zheng-wu; Wang, Yu-de; Jiang, Guo-song

    2008-12-01

    In this paper, we study the special requirements of streaming media servers and propose a solution to improve the I/O performance of RAID storage systems for streaming media applications. A streaming media storage subsystem includes the I/O interfaces, RAID arrays, I/O scheduling and device drivers. The solution is implemented on top of the storage subsystem I/O interface. The storage subsystem is the performance bottleneck of a streaming media system, and the I/O interface directly affects the performance of the storage subsystem. According to theoretical analysis, a 64 KB block size is most appropriate for streaming media applications. We carried out detailed experiments and verified that the proper block size really is 64 KB, in accordance with our analysis. The experimental results also show that by using a DMA controller, efficient memory-management technology, and a mailbox interface design mechanism, the streaming media storage system achieves high-speed data throughput.
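
    The 64 KB figure is the paper's conclusion; a generic micro-benchmark of the kind one could use to compare sequential-read throughput at candidate block sizes might look as follows (the file path is a placeholder):

```python
import time

def read_throughput(path, block_size):
    """Sequentially read a file and return throughput in MB/s."""
    total, start = 0, time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 2**20

# Compare candidate block sizes on a large test file (path is illustrative).
for size in (4 * 2**10, 16 * 2**10, 64 * 2**10, 256 * 2**10):
    print(f"{size // 2**10:>4} KB: {read_throughput('/tmp/testfile', size):8.1f} MB/s")
```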

  11. Interplanetary stream magnetism: Kinematic effects

    International Nuclear Information System (INIS)

    Burlaga, L.F.; Barouch, E.

    1976-01-01

    The particle density, and the magnetic field intensity and direction, are calculated for volume elements of the solar wind as a function of the initial magnetic field direction, Φ₀, and the initial speed gradient, (ΔV/ΔR)₀. It is assumed that the velocity is constant and radial. These assumptions are approximately valid between approx. 0.1 and 1.0 AU for many streams. Time profiles of n, B, and V are calculated for corotating streams, neglecting effects of pressure gradients. The compression and rarefaction of B depend sensitively on Φ₀. By averaging over a typical stream, it is found that ⟨n⟩ varies approximately as r⁻², whereas ⟨B⟩ does not vary in a simple way, consistent with deep-space observations. Changes of field direction may be very large, depending on the initial angle; but when the initial angle at 0.1 AU is such that the base of the field line corotates with the Sun, the spiral angle is the preferred direction at 1 AU. The theory is also applicable to nonstationary flows

  12. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  13. Decentralized Cloud Method For Multicasting Media Stream

    Directory of Open Access Journals (Sweden)

    D M B N Bandara

    2015-08-01

    Full Text Available With the advancement of information technology, the ways in which ideas are shared have advanced as well. For most presentations, a personal computer and a projector have become essential, yet on most occasions these devices are connected with cables and other physical equipment. This is inefficient and time consuming, and if a problem occurs, someone with technical knowledge is needed to resolve it. The objective of this research is to use wireless technology to reduce manual configuration and to build a platform where one can easily share files, visual media, and feedback. A system has been developed that detects all devices on a network and, once permission is granted, shares video, audio, and access controls. The outcome of the research is a collaborative software bundle whose parts work together over a network: a desktop application and a mobile application. The desktop application detects all other devices on the network that provide the same facility and, if required, can form a group and share its screen, files, and a message stream with each device using multicasting. The mobile application acts as a remote control for the host computer of the group, detecting user input and passing it to the system.

  14. Josephson junctions array resonators

    Energy Technology Data Exchange (ETDEWEB)

    Gargiulo, Oscar; Muppalla, Phani; Mirzaei, Iman; Kirchmair, Gerhard [Institute for Quantum Optics and Quantum Information, Innsbruck (Austria)

    2016-07-01

    We present an experimental analysis of the self- and cross-Kerr effect of extended plasma resonances in Josephson junction chains. The chain consists of 1600 individual junctions, and we can measure quality factors in excess of 10000. The Kerr effect manifests itself as a frequency shift that depends linearly on the number of photons in a resonant mode. By changing the input power we are able to measure this frequency shift on a single mode (self-Kerr). By changing the input power on another mode while measuring the same one, we are able to evaluate the cross-Kerr effect. We can measure the cross-Kerr effect by probing the resonance frequency of one mode while exciting another mode of the array with a microwave drive.

  15. Diagnosable structured logic array

    Science.gov (United States)

    Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)

    2009-01-01

    A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising: a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node; a plurality of switches coupled to the selection nodes, where each switch comprises a plurality of input lines, a selection line and an output line; a memory cell coupled to the output node; and a test address bus and a program control bus coupled to the plurality of input lines and the selection line of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.

  16. Low Frequency Space Array

    International Nuclear Information System (INIS)

    Dennison, B.; Weiler, K.W.; Johnston, K.J.

    1987-01-01

    The Low Frequency Space Array (LFSA) is a conceptual mission to survey the entire sky and to image individual sources at frequencies between 1.5 and 26 MHz, a frequency range over which the earth's ionosphere transmits poorly or not at all. With high resolution, high sensitivity observations, a new window will be opened in the electromagnetic spectrum for astronomical investigation. Also, extending observations down to such low frequencies will bring astronomy to the fundamental limit below which the galaxy becomes optically thick due to free-free absorption. A number of major scientific goals can be pursued with such a mission, including mapping galactic emission and absorption, studies of individual source spectra in a frequency range where a number of important processes may play a role, high resolution imaging of extended sources, localization of the impulsive emission from Jupiter, and a search for coherent emission processes. 19 references

  17. Scintillator detector array

    International Nuclear Information System (INIS)

    Cusano, D.A.; Dibianca, F.A.

    1981-01-01

    This patent application relates to a scintillator detector array for use in computerized tomography and comprises a housing including a plurality of chambers, the said housing having a front wall transmissive to x-rays and side walls opaque to x-rays, such as of tungsten and tantalum, a liquid scintillation medium including a soluble fluor, the solvent for the fluor being disposed in the chambers. The solvent comprises either an intrinsically high Z solvent or a solvent which has dissolved therein a high Z compound e.g. iodo or bromonaphthalene; or toluene, xylene or trimethylbenzene with a lead or tin alkyl dissolved therein. Also disposed about the chambers are a plurality of photoelectric devices. (author)

  18. A fast, exact code for scattered thermal radiation compared with a two-stream approximation

    International Nuclear Information System (INIS)

    Cogley, A.C.; Pandey, D.K.

    1980-01-01

    A two-stream accuracy study for internally (thermal) driven problems is presented by comparison with a recently developed 'exact' adding/doubling method. The resulting errors in external (or boundary) radiative intensity and flux are usually larger than those for the externally driven problems and vary substantially with the radiative parameters. Error predictions for a specific problem are difficult. An unexpected result is that the exact method is computationally as fast as the two-stream approximation for nonisothermal media

  19. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    Science.gov (United States)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the domain of mobile App recommendation, an item-based collaborative filtering algorithm combined with a weighted Slope One algorithm is applied, further improving on traditional collaborative filtering with respect to the cold-start problem, data-matrix sparseness, and other issues. The recommendation algorithm is parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the real-time performance of the recommended applications.
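
    Spark-side parallelization aside, the core of weighted Slope One is compact enough to sketch on a single machine; the toy ratings and function name below are invented for illustration.

```python
from collections import defaultdict

def slope_one_predict(ratings, user, target):
    """Weighted Slope One: predict ratings[user][target].

    For each item j the user has rated, use the average deviation
    dev(target, j) over all users who rated both items, weighted by
    how many users that was.  ratings: {user: {item: rating}}.
    """
    diffs, counts = defaultdict(float), defaultdict(int)
    for r in ratings.values():
        if target in r:
            for j, rj in r.items():
                if j != target:
                    diffs[j] += r[target] - rj
                    counts[j] += 1
    num = den = 0.0
    for j, rj in ratings[user].items():
        if counts[j]:
            dev = diffs[j] / counts[j]
            num += (rj + dev) * counts[j]
            den += counts[j]
    return num / den if den else None

toy = {"a": {"app1": 5, "app2": 3, "app3": 2},
       "b": {"app1": 3, "app2": 4},
       "c": {"app2": 2, "app3": 5}}
print(slope_one_predict(toy, "b", "app3"))  # ~3.33
```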

  20. Acoustic streaming in pulsating flows through porous media

    International Nuclear Information System (INIS)

    Valverde, J.M.; Durán-Olivencia, F.J.

    2014-01-01

    When a body immersed in a viscous fluid is subjected to a sound wave (or, equivalently, the body oscillates in the fluid otherwise at rest) a rotational fluid stream develops across a boundary layer near the fluid-body interface. This so-called acoustic streaming phenomenon is responsible for a notable enhancement of heat, mass and momentum transfer and takes place in any process involving two phases subjected to relative oscillations. Understanding the fundamental mechanisms governing acoustic streaming in two-phase flows is of great interest for a wide range of applications such as sonoprocessed fluidized bed reactors, thermoacoustic refrigerators/engines, pulsatile flows through veins/arteries, hemodialysis devices, pipes in offshore platforms, offshore piers, vibrating structures in the power-generating industry, lab-on-a-chip microfluidics and microgravity acoustic levitation, and solar thermal collectors to name a few. The aim of engineering studies on this vast diversity of systems is oriented towards maximizing the efficiency of each particular process. Even though practical problems are usually approached from disparate disciplines without any apparent linkage, the behavior of these systems is influenced by the same underlying physics. In general, acoustic streaming occurs within the interstices of porous media and usually in the presence of externally imposed steady fluid flows, which gives rise to important effects arising from the interference between viscous boundary layers developed around nearby solid surfaces and the nonlinear coupling between the oscillating and steady flows. This paper is mainly devoted to highlighting the fundamental physics behind acoustic streaming in porous media in order to provide a simple instrument to assess the relevance of this phenomenon in each particular application. The exact microscopic Navier-Stokes equations will be numerically solved for a simplified 2D system consisting of a regular array of oscillating

  1. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  2. Networked Sensor Arrays

    International Nuclear Information System (INIS)

    Tighe, R. J.

    2002-01-01

    A set of independent radiation sensors, coupled with real-time data telemetry, offers the opportunity to run correlation algorithms for the sensor array as well as to incorporate non-radiological data into the system. This may enhance the overall sensitivity of the sensors and provide an opportunity to project the location of a source within the array. In collaboration with Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL), we have conducted field experiments to test a prototype system. Combining the outputs of a set of distributed sensors permits the correlation of the independent sensor outputs. Combined with additional information such as traffic patterns and velocities, this can reduce random/false detections and enhance detection capability. The principal components of such a system include: (1) A set of radiation sensors. These may be of varying type and complexity, including gamma and/or neutron detectors, gross count and spectral-capable sensors, and low to high energy-resolution sensors. (2) A set of non-radiation sensors. These may include sensors such as vehicle presence and imaging sensors. (3) A communications architecture for near real-time telemetry. Depending upon existing infrastructure and bandwidth requirements, this may be a radio or hard-wire based system. (4) A central command console to poll the sensors, correlate their output, and display the data in a meaningful form to the system operator. Both sensitivity and selectivity are important considerations when evaluating the performance of a detection system. Depending on the application, the optimization of sensitivity as well as the rejection of "nuisance" radioactive sources may or may not be critical.

  3. Abductive Inference using Array-Based Logic

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.

    The notion of abduction has found usage within a wide variety of AI fields. Computing abductive solutions has, however, proved highly intractable in logic programming. To avoid this intractability we present a new approach to logic-based abduction; through the geometrical view of data employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction in this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains.
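
    The array-based formulation is the paper's contribution; for intuition only, here is a brute-force rendering of propositional abduction over a finite domain, enumerating worlds to find hypotheses that, together with the rules, entail the observation (rules and names are invented):

```python
from itertools import product

# Toy knowledge base over booleans: rain -> wet_grass, sprinkler -> wet_grass.
def rules_hold(rain, sprinkler, wet_grass):
    return (not rain or wet_grass) and (not sprinkler or wet_grass)

def abduce(observation_var=2, abducibles=(0, 1)):
    """Find single abducibles whose truth, together with the rules,
    entails the observation (wet_grass) in every consistent world."""
    explanations = []
    for a in abducibles:
        worlds = [w for w in product([False, True], repeat=3)
                  if rules_hold(*w) and w[a]]
        if worlds and all(w[observation_var] for w in worlds):
            explanations.append(("rain", "sprinkler")[a])
    return explanations

print(abduce())  # ['rain', 'sprinkler']: either hypothesis explains wet grass
```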

  4. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm.

  5. Design of 3x3 Focusing Array for Heavy Ion Driver Final Report on CRADA TC-02082-04

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    This memo presents a design of a 3x3 quadrupole array for HIF. It contains 3-D magnetic field computations of the array built with racetrack coils, with and without different shields. It is shown that it is possible to have a low-error magnetic field in the cells and to shield the stray fields to acceptable levels. The array design appears to be a practical solution for any size of array for future multi-beam heavy ion fusion drivers.

  6. Characterizing Milky Way Tidal Streams and Dark Matter with MilkyWay@home

    Science.gov (United States)

    Newberg, Heidi Jo; Shelton, Siddhartha; Weiss, Jake

    2018-01-01

    MilkyWay@home is a 0.5 PetaFLOPS volunteer computing platform that is mapping out the density substructure of the Sagittarius Dwarf Tidal Stream, the so-called bifurcated portion of the Sagittarius Stream, and the Virgo Overdensity, using turnoff stars from the Sloan Digital Sky Survey. It is also using the density of stars along tidal streams such as the Orphan Stream to constrain properties of the dwarf galaxy progenitor of this stream, including the dark matter portion. Both of these programs are enabled by a specially-built optimization package that uses differential evolution or particle swarm methods to find the optimal model parameters to fit a set of data. To fit the density of tidal streams, 20 parameters are simultaneously fit to each 2.5-degree-wide stripe of SDSS data. Five parameters describing the stellar and dark matter profile of the Orphan Stream progenitor and the time that the dwarf galaxy has been evolved through the Galactic potential are used in an n-body simulation that is then fit to observations of the Orphan Stream. New results from MilkyWay@home will be presented. This project was supported by NSF grant AST 16-15688, the NASA/NY Space Grant fellowship, and contributions made by The Marvin Clan, Babette Josephs, Manit Limlamai, and the 2015 Crowd Funding Campaign to Support Milky Way Research.
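
    The MilkyWay@home optimizer is custom-built, but the style of fit it performs, differential evolution searching a bounded parameter space to minimize a model-data mismatch, can be illustrated with SciPy on a toy two-parameter profile fit (all data and names here are invented):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for "fit model parameters to observed star counts":
# recover the centre and width of a Gaussian density profile.
rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
observed = np.exp(-0.5 * ((x - 1.2) / 0.7) ** 2) + 0.02 * rng.normal(size=x.size)

def loss(params):
    mu, sigma = params
    model = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return np.sum((model - observed) ** 2)

result = differential_evolution(loss, bounds=[(-5, 5), (0.1, 3)], seed=2)
print(result.x)  # close to (1.2, 0.7)
```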

  7. A New Streamflow-Routing (SFR1) Package to Simulate Stream-Aquifer Interaction with MODFLOW-2000

    Science.gov (United States)

    Prudic, David E.; Konikow, Leonard F.; Banta, Edward R.

    2004-01-01

    The increasing concern for water and its quality requires improved methods to evaluate the interaction between streams and aquifers and the strong influence that streams can have on the flow and transport of contaminants through many aquifers. For this reason, a new Streamflow-Routing (SFR1) Package was written for use with the U.S. Geological Survey's MODFLOW-2000 ground-water flow model. The SFR1 Package is linked to the Lake (LAK3) Package, and both have been integrated with the Ground-Water Transport (GWT) Process of MODFLOW-2000 (MODFLOW-GWT). SFR1 replaces the previous Stream (STR1) Package, with the most important difference being that stream depth is computed at the midpoint of each reach instead of at the beginning of each reach, as was done in the original Stream Package. This approach allows for the addition and subtraction of water from runoff, precipitation, and evapotranspiration within each reach. Because the SFR1 Package computes stream depth differently than that for the original package, a different name was used to distinguish it from the original Stream (STR1) Package. The SFR1 Package has five options for simulating stream depth and four options for computing diversions from a stream. The options for computing stream depth are: a specified value; Manning's equation (using a wide rectangular channel or an eight-point cross section); a power equation; or a table of values that relate flow to depth and width. Each stream segment can have a different option. Outflow from lakes can be computed using the same options. Because the wetted perimeter is computed for the eight-point cross section and width is computed for the power equation and table of values, the streambed conductance term no longer needs to be calculated externally whenever the area of streambed changes as a function of flow. The concentration of solute is computed in a stream network when MODFLOW-GWT is used in conjunction with the SFR1 Package. The concentration of a solute in a
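
    For a concrete sense of one of the depth options, Manning's equation for a wide rectangular channel (hydraulic radius ≈ depth) inverts in closed form; the channel numbers below are made up, and SI units are assumed so the Manning conversion constant is 1. This complements the root-finding sketch given for the CSR package above.

```python
def sfr_depth_wide_rectangular(q, width, slope, n_manning):
    """Depth y from Manning's equation with R ~ y (wide channel, SI units):
    Q = (1/n) * width * y**(5/3) * sqrt(S)  =>  solve for y."""
    return (q * n_manning / (width * slope ** 0.5)) ** (3.0 / 5.0)

# Midpoint depth for a reach carrying 12 m^3/s (illustrative numbers).
print(sfr_depth_wide_rectangular(q=12.0, width=30.0, slope=5e-4, n_manning=0.03))
```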

  8. Innovation in radioactive wastewater stream management

    International Nuclear Information System (INIS)

    Shaaban, D.A.E.F.

    2010-01-01

    Treatment of radioactive waste streams is receiving considerable attention in most countries. The present work addresses radioactive wastewater stream management through volume reduction by mutual heating and humidification of compressed dry air introduced through the wastewater. A mathematical model describing the volume reduction at the optimum operating conditions is developed. A set of coupled first-order differential equations, obtained through the mass and energy conservation laws, is used to obtain the humidity ratio, the water diffused to the air stream, the water temperature, and the humid air stream temperature distributions through the bubbling column. These coupled differential equations are solved simultaneously and numerically by the developed computer program using the fourth-order Runge-Kutta method. The results obtained with the present mathematical model revealed that the air bubble state variables, such as the mass transfer coefficient (K_G) and the interfacial area (a), have a strong effect on the process. Therefore, the behavior of the air bubble state variables with column height can be predicted and optimized. Moreover, design curves for the volumetric reduction of the wastewater streams are obtained and assessed at different operating conditions. An experimental setup was constructed to verify the suggested model. A comprehensive comparison between the suggested model results, recent experimental measurements, and the results of previous work was carried out.
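
    The column model itself is the paper's; a generic classical fourth-order Runge-Kutta integrator of the kind described, applied here to an invented two-variable demo system, looks like:

```python
import numpy as np

def rk4(f, y0, t0, t1, steps):
    """Integrate dy/dt = f(t, y) with the classical 4th-order Runge-Kutta."""
    y, t = np.asarray(y0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Demo: exponential relaxation of two coupled state variables.
f = lambda t, y: np.array([-0.5 * y[0], 0.5 * y[0] - 0.2 * y[1]])
print(rk4(f, y0=[1.0, 0.0], t0=0.0, t1=10.0, steps=200))
```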

  9. Experimental and numerical study of a flapping tidal stream generator

    Science.gov (United States)

    Kim, Jihoon; Le, Tuyen Quang; Ko, Jin Hwan; Sitorus, Patar Ebenezer; Tambunan, Indra Hartarto; Kang, Taesam

    2017-11-01

    The tidal stream turbine is one of the systems that extract kinetic energy from tidal streams, and there are several types of tidal stream turbine, distinguished by their operating motion. In this research, we conduct experimental and consecutive numerical analyses of a flapping tidal stream generator with a dual-flapper configuration. An experimental analysis of a small-scale prototype is conducted in a towing tank, and a numerical analysis is conducted using two-dimensional computational fluid dynamics simulations with an in-house code. In the experiments, conducted while varying the applied load and the input arm angle, a high applied load and a high input arm angle were found to be advantageous. In consecutive numerical investigations with the kinematics selected from the experiments, it was found that a rear-swing flapper contributes more to the total power than a front-swing flapper when the two are separated by a distance of two chord lengths with a 90-degree phase difference. This research was part of the project titled `R&D center for underwater construction robotics', funded by the Ministry of Oceans and Fisheries (MOF), the Korea Institute of Marine Science & Technology Promotion (KIMST, PJT200539), and Pohang City in Korea.

  10. Device interactions in reducing the cost of tidal stream energy

    International Nuclear Information System (INIS)

    Vazquez, A.; Iglesias, G.

    2015-01-01

    Highlights: • Numerical modelling is used to estimate the levelised cost of tidal stream energy. • As a case study, a model of Lynmouth (UK) is implemented and successfully validated. • The resolution of the model allows the demarcation of individual devices on the model grid. • Device interactions reduce the available tidal resource, and the cost increases significantly. - Abstract: The levelised cost of energy takes into account the lifetime generated energy and the costs associated with a project. The objective of this work is to investigate, by means of numerical modelling, the effects of device interactions on the energy output and, therefore, on the levelised cost of energy of a tidal stream project. For this purpose, a case study is considered: Lynmouth (North Devon, UK), an area in the Bristol Channel in which the first tidal stream turbine was installed, a testimony to its potential as a tidal energy site. A state-of-the-art hydrodynamics model is implemented on a high-resolution computational grid, which allows the demarcation of the individual devices. The modification to the energy output resulting from interaction between turbines within the tidal farm is thus resolved for each individual turbine. The results indicate that these interactions cause significant changes in the levelised cost of energy, of up to £0.221 kW h⁻¹, which should not be disregarded if the cost of tidal stream energy is to be minimised.
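    For readers unfamiliar with the metric, the levelised cost of energy is the ratio of discounted lifetime costs to discounted lifetime energy output. The toy calculation below (all figures invented for illustration, not taken from the paper) shows why interaction-induced yield losses raise the LCOE: the same costs are spread over fewer kilowatt-hours.

    ```python
    def levelised_cost(capex, opex_per_year, energy_per_year, lifetime, rate):
        """LCOE (currency per kWh): discounted lifetime costs divided by
        discounted lifetime energy output."""
        discount = sum(1.0 / (1.0 + rate) ** t for t in range(1, lifetime + 1))
        costs = capex + opex_per_year * discount
        energy = energy_per_year * discount
        return costs / energy

    # Hypothetical single-turbine project over 20 years at a 10% discount rate.
    base = levelised_cost(2.0e6, 1.0e5, 3.0e6, 20, 0.10)   # ~0.11 per kWh
    waked = levelised_cost(2.0e6, 1.0e5, 2.4e6, 20, 0.10)  # 20% yield loss -> ~0.14
    ```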

  11. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin; Ahmadia, Aron; Brown, Jed; Gunnels, John A.; Keyes, David E.

    2012-01-01

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution

  12. Global perspectives on the urban stream syndrome

    Science.gov (United States)

    Roy, Allison; Booth, Derek B.; Capps, Krista A.; Smith, Benjamin

    2016-01-01

    Urban streams commonly express degraded physical, chemical, and biological conditions that have been collectively termed the “urban stream syndrome”. The description of the syndrome highlights the broad similarities among these streams relative to their less-impaired counterparts. Awareness of these commonalities has fostered rapid improvements in the management of urban stormwater for the protection of downstream watercourses, but the focus on the similarities among urban streams has obscured meaningful differences among them. Key drivers of stream responses to urbanization can vary greatly among climatological and physiographic regions of the globe, and the differences can be manifested in individual stream channels even through the homogenizing veneer of urban development. We provide examples of differences in natural hydrologic and geologic settings (within similar regions) that can result in different mechanisms of stream ecosystem response to urbanization and, as such, should lead to different management approaches. The idea that all urban streams can be cured using the same treatment is simplistic, but overemphasizing the tremendous differences among natural (or human-altered) systems also can paralyze management. Thoughtful integration of work that recognizes the commonalities of the urban stream syndrome across the globe has benefitted urban stream management. Now we call for a more nuanced understanding of the regional, subregional, and local attributes of any given urban stream and its watershed to advance the physical, chemical, and ecological recovery of these systems.

  13. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredictable ways. To reduce the computational cost, data streams are often studied in condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection, and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
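    A minimal sketch of the core idea follows, assuming a Gaussian kernel, a fixed (non-adaptive) grid of resampling points, and exponential forgetting as the online update; the thesis's adaptive resampling in high-curvature regions is omitted, and the class and parameter names are invented for illustration.

    ```python
    import numpy as np

    def gaussian_kernel(u):
        return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

    class OnlineKDE:
        """Maintain PDF values at fixed, sorted resampling points; each
        arriving stream item updates them with exponential forgetting, and
        queries are answered by linear interpolation between the points."""

        def __init__(self, points, bandwidth, forgetting=0.01):
            self.points = np.asarray(points, dtype=float)  # must be sorted
            self.density = np.zeros_like(self.points)
            self.h = bandwidth
            self.lam = forgetting

        def update(self, x):
            kernel = gaussian_kernel((self.points - x) / self.h) / self.h
            # Forgetting down-weights old data so the model tracks drift.
            self.density = (1.0 - self.lam) * self.density + self.lam * kernel

        def pdf(self, x):
            return np.interp(x, self.points, self.density)
    ```

    A sudden rise of the interpolated density in a region where it used to be near zero can then serve as a simple change or outlier signal, which is the spirit of the two applications described above.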

  14. A Mechanism for Cytoplasmic Streaming: Kinesin-Driven Alignment of Microtubules and Fast Fluid Flows.

    Science.gov (United States)

    Monteith, Corey E; Brunner, Matthew E; Djagaeva, Inna; Bielecki, Anthony M; Deutsch, Joshua M; Saxton, William M

    2016-05-10

    The transport of cytoplasmic components can be profoundly affected by hydrodynamics. Cytoplasmic streaming in Drosophila oocytes offers a striking example. Forces on fluid from kinesin-1 are initially directed by a disordered meshwork of microtubules, generating minor slow cytoplasmic flows. Subsequently, to mix incoming nurse cell cytoplasm with ooplasm, a subcortical layer of microtubules forms parallel arrays that support long-range, fast flows. To analyze the streaming mechanism, we combined observations of microtubule and organelle motions with detailed mathematical modeling. In the fast state, microtubules tethered to the cortex form a thin subcortical layer and undergo correlated sinusoidal bending. Organelles moving in flows along the arrays show velocities that are slow near the cortex and fast on the inward side of the subcortical microtubule layer. Starting with fundamental physical principles suggested by qualitative hypotheses, and with published values for microtubule stiffness, kinesin velocity, and cytoplasmic viscosity, we developed a quantitative coupled hydrodynamic model for streaming. The fully detailed mathematical model and its simulations identify key variables that can shift the system between disordered (slow) and ordered (fast) states. Measurements of array curvature, wave period, and the effects of diminished kinesin velocity on flow rates, as well as prior observations on f-actin perturbation, support the model. This establishes a concrete mechanistic framework for the ooplasmic streaming process. The self-organizing fast phase is a result of viscous drag on kinesin-driven cargoes that mediates equal and opposite forces on cytoplasmic fluid and on microtubules whose minus ends are tethered to the cortex. Fluid moves toward plus ends and microtubules are forced backward toward their minus ends, resulting in buckling. Under certain conditions, the buckling microtubules self-organize into parallel bending arrays, guiding varying directions

  15. Drug perfusion enhancement in tissue model by steady streaming induced by oscillating microbubbles.

    Science.gov (United States)

    Oh, Jin Sun; Kwon, Yong Seok; Lee, Kyung Ho; Jeong, Woowon; Chung, Sang Kug; Rhee, Kyehan

    2014-01-01

    Drug delivery into neurological tissue is challenging because of the low tissue permeability. Ultrasound incorporating microbubbles has been applied to enhance drug delivery into these tissues, but the effects of a streaming flow induced by microbubble oscillation on drug perfusion have not been elucidated. In order to clarify the physical effects of steady streaming on drug delivery, an experimental study on dye perfusion into a tissue model was performed using microbubbles excited by acoustic waves. The surface concentration and penetration length of the drug were increased by 12% and 13%, respectively, with streaming flow. The mass of dye perfused into a tissue phantom for 30 s was increased by about 20% in the phantom with oscillating bubbles. A computational model that considers fluid-structure interaction for streaming flow fields induced by oscillating bubbles was developed, and mass transfer of the drug into the porous tissue model was analyzed. The computed flow fields agreed with the theoretical solutions, and the dye concentration distribution in the tissue agreed well with the experimental data. The computational results showed that steady streaming with a streaming velocity of a few millimeters per second promotes mass transfer into a tissue. © 2013 Published by Elsevier Ltd.

  16. Programmable architecture for quantum computing

    NARCIS (Netherlands)

    Chen, J.; Wang, L.; Charbon, E.; Wang, B.

    2013-01-01

    A programmable architecture called “quantum FPGA (field-programmable gate array)” (QFPGA) is presented for quantum computing, which is a hybrid model combining the advantages of the qubus system and measurement-based quantum computation. There are two kinds of buses in QFPGA, the local bus and

  17. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are "number crunchers" that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.

  18. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased array systems. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for non-destructive testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...
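    The abstract does not spell out the reconstruction algorithm, but sampling-phased-array systems are generally associated with capturing the full matrix of transmit-receive A-scans and then focusing synthetically in software. The sketch below shows the generic delay-and-sum reconstruction under that assumption; the array layout (a linear array at z = 0), variable names, and data shapes are all illustrative.

    ```python
    import numpy as np

    def delay_and_sum(fmc, elem_x, grid, c, fs):
        """Synthetic focusing from full-matrix-capture data.

        fmc[i, j, t] is the A-scan received on element j after firing
        element i; elem_x holds the element x-positions on a linear array
        at z = 0. Each image pixel sums the samples whose round-trip
        travel time (at sound speed c, sampling rate fs) matches the
        pixel position."""
        n_tx, n_rx, n_t = fmc.shape
        image = np.zeros(len(grid))
        for p, (px, pz) in enumerate(grid):
            for i in range(n_tx):
                d_tx = np.hypot(px - elem_x[i], pz)
                for j in range(n_rx):
                    d_rx = np.hypot(px - elem_x[j], pz)
                    t = int(round((d_tx + d_rx) / c * fs))
                    if t < n_t:
                        image[p] += fmc[i, j, t]
        return image
    ```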

  19. Piezo-Phototronic Enhanced UV Sensing Based on a Nanowire Photodetector Array.

    Science.gov (United States)

    Han, Xun; Du, Weiming; Yu, Ruomeng; Pan, Caofeng; Wang, Zhong Lin

    2015-12-22

    A large array of Schottky UV photodetectors (PDs) based on vertically aligned ZnO nanowires is achieved. By introducing the piezo-phototronic effect, the performance of the PD array is enhanced up to seven times in photoresponsivity, six times in sensitivity, and 2.8 times in detection limit. The UV PD array may have applications in optoelectronic systems, adaptive optical computing, and communication. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Self-assembled ordered carbon-nanotube arrays and membranes.

    Energy Technology Data Exchange (ETDEWEB)

    Overmyer, Donald L.; Siegal, Michael P.; Yelton, William Graham

    2004-11-01

    Imagine free-standing flexible membranes with highly-aligned arrays of carbon nanotubes (CNTs) running through their thickness. Perhaps with both ends of the CNTs open for highly controlled nanofiltration? Or CNTs at heights uniformly above a polymer membrane for a flexible array of nanoelectrodes or field-emitters? How about CNT films with incredible amounts of accessible surface area for analyte adsorption? These self-assembled crystalline nanotubes consist of multiple layers of graphene sheets rolled into concentric cylinders. Tube diameters (3-300 nm), inner-bore diameters (2-15 nm), and lengths (nanometers to microns) are controlled to tailor physical, mechanical, and chemical properties. We proposed to explore growth and characterize nanotube arrays to help determine their exciting functionality for Sandia applications. Thermal chemical-vapor-deposition growth in a furnace nucleates from a metal catalyst. Ordered arrays grow using templates from self-assembled hexagonal arrays of nanopores in anodized aluminum oxide. Polymeric binders can mechanically hold the CNTs in place for polishing, lift-off, and membrane formation. The stiffness and the electrical and thermal conductivities of CNTs make them ideally suited for a wide variety of possible applications. Large-area, highly-accessible gas-adsorbing carbon surfaces, superb cold-cathode field emission, and unique nanoscale geometries can lead to advanced microsensors using analyte adsorption, arrays of functionalized nanoelectrodes for enhanced electrochemical detection of biological/explosive compounds, or mass-ionizers for gas-phase detection. Materials studies involving membrane formation may lead to exciting breakthroughs in nanofiltration/nanochromatography for the separation of chemical and biological agents. With controlled nanofilter sizes, ultrafiltration will be viable to separate and preconcentrate viruses and many strains of bacteria for 'down-stream' analysis.