WorldWideScience

Sample records for streaming array computer

  1. Battling memory requirements of array programming through streaming

    DEFF Research Database (Denmark)

    Kristensen, Mads Ruben Burgdorff; Avery, James Emil; Blum, Troels

    2016-01-01

    A barrier to efficient array programming, for example in Python/NumPy, is that algorithms written as pure array operations completely without loops, while most efficient on small input, can lead to explosions in memory use. The present paper presents a solution to this problem using array streaming, implemented in the automatic parallelization high-performance framework Bohrium. This makes it possible to use array programming in Python/NumPy code directly, even when the apparent memory requirement exceeds the machine capacity, since the automatic streaming eliminates the temporary memory overhead by performing calculations in per-thread registers. Using Bohrium, we automatically fuse, JIT-compile, and execute NumPy array operations on GPGPUs without modification to the user programs. We present performance evaluations of three benchmarks, all of which show dramatic reductions in memory use...
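
    The memory blow-up and its streaming remedy can be sketched directly in NumPy. The chunked loop below is a hand-written stand-in for what Bohrium's fusion does automatically; the chunk size and the distance computation are illustrative choices, not taken from the paper.

```python
import numpy as np

def naive_norm(a, b):
    # Pure array code: (a - b) ** 2 materializes full-size temporaries
    # before the reduction, multiplying peak memory use on large inputs.
    return np.sqrt(np.sum((a - b) ** 2))

def streamed_norm(a, b, chunk=1 << 16):
    # Manual "streaming": process fixed-size chunks so every temporary
    # stays small no matter how large the inputs are. Bohrium performs an
    # analogous fusion automatically, keeping intermediates in registers.
    acc = 0.0
    for i in range(0, a.size, chunk):
        d = a[i:i + chunk] - b[i:i + chunk]
        acc += float(np.dot(d, d))
    return np.sqrt(acc)

rng = np.random.default_rng(0)
a, b = rng.random(200_000), rng.random(200_000)
```

    Both routes give the same answer; only the peak size of the temporaries differs.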

  2. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free access by processors to data in memory, and supports processor-to-processor data access over an enhanced MESH network. The ABC95 instruction system includes control instructions, scalar instructions, and vector instructions; the network instructions are chiefly introduced here. A programming environment for ABC95 array computer assembly language is designed, and a VC++-based programming environment for the ABC95 array computer is presented; it includes functions to load ABC95 array computer programs and data, to store them, to run them, and so on. In particular, the data types for ABC95 conflict-free access are defined. The results show that these technologies enable effective program development for the ABC95 array computer.

  3. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation at NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability statistically indistinguishable from those of the native SciDB storage engine.

  4. Experimental investigations of ablation stream interaction dynamics in tungsten wire arrays: Interpenetration, magnetic field advection, and ion deflection

    Energy Technology Data Exchange (ETDEWEB)

    Swadling, G. F.; Lebedev, S. V.; Hall, G. N.; Suzuki-Vidal, F.; Burdiak, G. C.; Pickworth, L.; De Grouchy, P.; Skidmore, J.; Khoory, E.; Suttle, L.; Bennett, M.; Hare, J. D.; Clayson, T.; Bland, S. N.; Smith, R. A.; Stuart, N. H.; Patankar, S.; Robinson, T. S. [Blackett Laboratory, Imperial College, London SW7 2BW (United Kingdom); Harvey-Thompson, A. J. [Sandia National Laboratories, PO Box 5800, Albuquerque, New Mexico 87185-1193 (United States); Rozmus, W. [Department of Physics, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); and others

    2016-05-15

    Experiments have been carried out to investigate the collisional dynamics of ablation streams produced by cylindrical wire array z-pinches. A combination of laser interferometric imaging, Thomson scattering, and Faraday rotation imaging has been used to make a range of measurements of the temporal evolution of various plasma and flow parameters. This paper presents a summary of previously published data, drawing together a range of different measurements in order to give an overview of the key results. The paper focuses mainly on the results of experiments with tungsten wire arrays. Early interferometric imaging measurements are reviewed, then more recent Thomson scattering measurements are discussed; these measurements provided the first direct evidence of ablation stream interpenetration in a wire array experiment. Combining the data from these experiments gives a view of the temporal evolution of the tungsten stream collisional dynamics. In the final part of the paper, we present new experimental measurements made using an imaging Faraday rotation diagnostic. These experiments investigated the structure of magnetic fields near the array axis directly; the presence of a magnetic field has previously been inferred based on Thomson scattering measurements of ion deflection near the array axis. Although the Thomson and Faraday measurements are not in full quantitative agreement, the Faraday data do qualitatively support the conjecture that the observed deflections are induced by a static toroidal magnetic field, which has been advected to the array axis by the ablation streams. It is likely that detailed modeling will be needed in order to fully understand the dynamics observed in the experiment.

  5. Field computation for two-dimensional array transducers with limited diffraction array beams.

    Science.gov (United States)

    Lu, Jian-Yu; Cheng, Jiqi

    2005-10-01

    A method is developed for calculating fields produced with a two-dimensional (2D) array transducer. This method decomposes an arbitrary 2D aperture weighting function into a set of limited diffraction array beams. Using the analytical expressions of limited diffraction beams, arbitrary continuous wave (cw) or pulse wave (pw) fields of 2D arrays can be obtained with a simple superposition of these beams. In addition, this method can be simplified and applied to a 1D array transducer of a finite or infinite elevation height. For beams produced with axially symmetric aperture weighting functions, this method reduces to the previously studied Fourier-Bessel method, where an annular array transducer can be used. The advantage of the method is that it is accurate and computationally efficient, especially in regions not far from the surface of the transducer (near field), which is important for medical imaging. Both computer simulations and a synthetic array experiment are carried out to verify the method. Results (Bessel beam, focused Gaussian beam, X wave, and asymmetric array beams) show that the method is accurate compared with the Rayleigh-Sommerfeld diffraction formula and agrees well with the experiment.

  6. Null stream analysis of Pulsar Timing Array data: localisation of resolvable gravitational wave sources

    Science.gov (United States)

    Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto

    2018-04-01

    Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nano-Hertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and number of pulsars in the PTA. Additionally, we find that source sky localisation with the International PTA data release one is vastly superior to that achieved by its constituent regional PTAs.
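
    The null-stream construction can be illustrated with plain linear algebra: if the Earth-term enters each pulsar's residuals through an N x 2 response matrix (one column per GW polarization), the N - 2 vectors spanning the left null space of that matrix define data combinations in which the signal cancels exactly. The response values below are random placeholders, not physical PTA antenna patterns.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10                       # pulsars in the array
# Hypothetical response matrix: each pulsar's sensitivity to the two GW
# polarizations (+, x) from one sky location. Real PTA responses depend
# on pulsar-source geometry; random values suffice for the sketch.
F = rng.standard_normal((N, 2))

# Null space of F^T: the N - 2 residual combinations that cancel any
# Earth-term signal of the form h = F @ [h_plus, h_cross].
_, _, Vt = np.linalg.svd(F.T)
null_basis = Vt[2:]          # shape (N - 2, N), orthonormal rows

h = F @ rng.standard_normal(2)     # a pure Earth-term signal, no noise
null_streams = null_basis @ h      # vanishes identically for any source
```

    Because the GW signal lies in the 2-dimensional column space of F, projecting onto the orthogonal complement removes it regardless of the source amplitudes.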

  7. Programmable stream prefetch with resource optimization

    Science.gov (United States)

    Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan

    2013-01-08

    A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments a prefetching depth of a first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
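
    The decision flow in the claim language above can be restated as a small state machine. The sketch below is a loose Python model for illustration only; the table layout, depth policy, and address arithmetic are invented, not taken from the patent.

```python
class StreamPrefetcher:
    """Toy model of the prefetch decision flow: track per-stream state,
    deepen a stream on a miss, and prefetch ahead within its depth."""

    def __init__(self):
        self.table = {}  # stream_id -> {"depth": int, "valid": set of addresses}

    def load(self, addr, stream_id):
        entry = self.table.get(stream_id)
        if entry is None:
            # First request on this stream: install it with a minimal depth.
            entry = {"depth": 1, "valid": set()}
            self.table[stream_id] = entry
        if addr not in entry["valid"]:
            # No valid prefetched data yet: deepen the stream, fetch the line.
            entry["depth"] += 1
            entry["valid"].add(addr)
        # Prefetch any additional lines needed within the current depth.
        prefetched = [addr + i for i in range(1, entry["depth"] + 1)
                      if addr + i not in entry["valid"]]
        entry["valid"].update(prefetched)
        return prefetched

engine = StreamPrefetcher()
first = engine.load(100, stream_id=0)   # miss: deepen stream, prefetch ahead
```
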

  8. Symbol Stream Combining Versus Baseband Combining for Telemetry Arraying

    Science.gov (United States)

    Divsalar, D.

    1983-01-01

    The objectives of this article are to investigate and analyze the problem of combining symbol streams from many Deep Space Network stations to enhance the bit signal-to-noise ratio, and to compare the performance of this combining technique with baseband combining. Symbol stream combining (SSC) has some advantages and some disadvantages over baseband combining (BBC). SSC suffers almost no loss in combining the digital data and no loss due to the transmission of the digital data by microwave links between the stations. BBC suffers 0.2 dB loss due to alignment and combining of the IF signals and 0.2 dB loss due to transmission of the signals by microwave links. On the other hand, the losses in the subcarrier demodulation assembly (SDA) and in the symbol synchronization assembly (SSA) for SSC are greater than the losses in the SDA and SSA for BBC. It is shown that SSC outperforms BBC by about 0.35 dB (in terms of the required bit energy-to-noise spectral density for a bit error rate of 10^-3) for an array of three DSN antennas, namely 64 m, 34 m (T/R), and 34 m (R).
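
    The SNR gain from symbol stream combining can be sketched with maximal-ratio weighting of soft symbols, the standard way to combine streams of unequal quality; the per-station SNR values below are illustrative, not DSN figures.

```python
import numpy as np

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], size=5000)      # BPSK symbol stream

# Per-station symbol SNRs (linear scale): stand-ins for one 64 m and
# two 34 m antennas, not calibrated DSN values.
snrs = np.array([4.0, 1.0, 1.0])
received = np.array([np.sqrt(g) * symbols + rng.standard_normal(symbols.size)
                     for g in snrs])

weights = np.sqrt(snrs)              # maximal-ratio combining weights
combined = weights @ received        # combined soft symbol stream

best_single = np.mean(np.sign(received[0]) == symbols)
combined_acc = np.mean(np.sign(combined) == symbols)
```

    With unit-variance noise, maximal-ratio combining adds the individual SNRs (here 4 + 1 + 1 = 6), so the combined stream decodes more symbols correctly than the best single station.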

  9. A MULTICORE COMPUTER SYSTEM FOR DESIGN OF STREAM CIPHERS BASED ON RANDOM FEEDBACK

    Directory of Open Access Journals (Sweden)

    Borislav BEDZHEV

    2013-01-01

    Full Text Available Stream ciphers are an important tool for providing information security in present-day communication and computer networks. For this reason, our paper describes a multicore computer system for the design of stream ciphers based on so-named random feedback shift registers (RFSRs). The interest in this theme is inspired by the following facts. First, RFSRs are a relatively new type of stream cipher which demonstrates a significant enhancement of crypto-resistance in comparison with classical stream ciphers. Second, the study of the features of RFSRs is at a very early stage. Third, the theory of RFSRs seems to be very hard, which leads to the necessity of exploring RFSRs mainly by means of computer models. The paper is organized as follows. First, the basics of RFSRs are recalled. After that, our multicore computer system for the design of stream ciphers based on RFSRs is presented. Finally, the advantages and possible areas of application of the computer system are discussed.
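
    Since the paper argues that RFSRs are best explored through computer models, a toy model helps fix ideas: a shift register whose feedback tap mask is redrawn each step from a keyed pseudo-random source. The tap-selection and output rules here are invented for illustration and are not a cryptographically sound construction.

```python
import random

class RandomFeedbackSR:
    """Sketch of a random-feedback shift register (RFSR): a classical
    LFSR whose feedback taps are re-drawn from a keyed PRNG each step."""

    def __init__(self, state, width, key):
        self.state = state
        self.width = width
        self.rng = random.Random(key)   # keyed source of feedback masks

    def step(self):
        taps = self.rng.getrandbits(self.width) | 1   # random tap mask
        fb = bin(self.state & taps).count("1") & 1    # parity of tapped bits
        out = self.state & 1
        self.state = (self.state >> 1) | (fb << (self.width - 1))
        return out

    def keystream(self, n):
        return [self.step() for _ in range(n)]
```

    Two registers initialised with the same key and state reproduce the same keystream, which is what makes such designs usable as stream ciphers.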

  10. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged-particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to those from the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
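
    The data-parallel pattern that makes such simulations GPU-friendly is easy to see in a linear-optics sketch: each element is a transfer matrix, and one matrix product transports an entire bunch at once. The element parameters below are toy values, not the DIAMOND transfer line.

```python
import numpy as np

# Each particle is a phase-space vector (x, x'); each element a 2x2
# transfer matrix. Vectorizing over particles is exactly the pattern
# that maps onto stream (GPU) hardware.
def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

line = [drift(1.0), thin_quad(2.5), drift(1.0)]   # a toy beamline
# Matrices apply right-to-left, so reverse the element order for the product.
combined = np.linalg.multi_dot(line[::-1])

rng = np.random.default_rng(0)
particles = rng.standard_normal((2, 100_000))     # bunch of (x, x') columns
transported = combined @ particles                # whole bunch in one product
```
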

  11. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array. Then, in calculating the MUSIC spectrum, the cyclic structure of the steering vector allows the inner products in the spatial-spectrum calculation to be evaluated by circular convolution. The computational cost of the MUSIC spectrum is markedly lower than that of the conventional method, making this a very practical approach for MUSIC spectrum computation with circular arrays.
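
    The cyclic-shift trick can be verified in a few lines: when the scanned steering vectors are cyclic shifts of a base vector, all the inner products that enter the MUSIC spectrum form one circular correlation, computable with FFTs in O(N log N) instead of O(N^2). The vectors below are random stand-ins; a real implementation would use the virtual-UCA steering vector and the noise-subspace eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # base steering vector
e = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # a noise eigenvector

# Direct route: inner product of e with every cyclic shift of a -- O(N^2).
direct = np.array([np.vdot(np.roll(a, m), e) for m in range(N)])

# FFT route: one circular correlation -- O(N log N).
fast = np.fft.ifft(np.fft.fft(e) * np.conj(np.fft.fft(a)))
```

    The MUSIC spectrum at shift m is then 1 / |inner product|^2 summed over the noise eigenvectors, so the whole scan inherits the FFT speed-up.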

  12. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high-speed wide-area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high-speed wide-area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the need to tune the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into this package. Finally, a few applications using this package are discussed.
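
    The core JPARSS idea, partitioning a buffer across parallel streams and reassembling by partition index, can be sketched without sockets; in-memory lists stand in for the parallel TCP connections, and the partitioning rule is an illustrative choice.

```python
import threading

def parallel_send(data: bytes, n_streams: int) -> bytes:
    """Split a buffer into partitions, push each over its own concurrent
    'stream', and reassemble by partition index on the receiving side."""
    size = (len(data) + n_streams - 1) // n_streams
    parts = [data[i * size:(i + 1) * size] for i in range(n_streams)]
    received = [None] * n_streams

    def send(i):
        received[i] = parts[i]   # a real stream would write to a TCP socket

    threads = [threading.Thread(target=send, args=(i,))
               for i in range(n_streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(received)
```

    Keeping the partition index with each chunk is what lets the receiver reassemble correctly even though the streams complete in arbitrary order.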

  13. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    Science.gov (United States)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming

  14. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi...

  15. Tests Of Array Of Flush Pressure Sensors

    Science.gov (United States)

    Larson, Larry J.; Moes, Timothy R.; Siemers, Paul M., III

    1992-01-01

    Report describes tests of an array of pressure sensors connected to small orifices flush with the surface of a 1/7-scale model of the F-14 airplane in a wind tunnel. Part of an effort to determine whether pressure parameters consisting of various sums, differences, and ratios of the measured pressures can be used to compute accurately the free-stream values of stagnation pressure, static pressure, angle of attack, angle of sideslip, and Mach number. Such arrays of sensors and associated processing circuitry could be integrated into advanced aircraft as parts of flight-monitoring and -controlling systems.
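
    One of the pressure parameters mentioned, Mach number from a stagnation-to-static pressure ratio, follows from the standard subsonic isentropic relation; this helper simply inverts that textbook formula and is not the flight-test calibration itself.

```python
import math

def mach_from_pressures(p_stag, p_static, gamma=1.4):
    """Subsonic Mach number from the isentropic relation
    p0/p = (1 + (gamma - 1)/2 * M^2) ** (gamma / (gamma - 1)),
    solved for M given the measured pressure ratio."""
    ratio = (p_stag / p_static) ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 / (gamma - 1.0) * (ratio - 1.0))
```

    For example, a pressure ratio of 1.05 ** 3.5 corresponds to exactly M = 0.5 for gamma = 1.4.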

  16. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    International Nuclear Information System (INIS)

    Ibrahimy, Abdullah Faruq Ibn; Rafiqul, Islam Md; Anwar, Farhat; Ibrahimy, Muhammad Ibn

    2013-01-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. Upon the departure of a peer with additional upload bandwidth, such overlay networks become very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which improves load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to heterogeneous-strength peers in a fair-treatment approach and to homogeneous-strength peers in a uniform-distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  17. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    Science.gov (United States)

    Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad

    2013-12-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. Upon the departure of a peer with additional upload bandwidth, such overlay networks become very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which improves load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to heterogeneous-strength peers in a fair-treatment approach and to homogeneous-strength peers in a uniform-distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  18. Benthic invertebrate fauna, small streams

    Science.gov (United States)

    J. Bruce Wallace; S.L. Eggert

    2009-01-01

    Small streams (first- through third-order streams) make up >98% of the total number of stream segments and >86% of stream length in many drainage networks. Small streams occur over a wide array of climates, geology, and biomes, which influence temperature, hydrologic regimes, water chemistry, light, substrate, stream permanence, a basin's terrestrial plant...

  19. Computer-aided engineering system for design of sequence arrays and lithographic masks

    Science.gov (United States)

    Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.

    1996-01-01

    An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

  20. Symbol-stream Combiner: Description and Demonstration Plans

    Science.gov (United States)

    Hurd, W. J.; Reder, L. J.; Russell, M. D.

    1984-01-01

    A system is described and demonstration plans presented for antenna arraying by symbol stream combining. This system is used to enhance the signal-to-noise ratio of spacecraft signals by combining the detected symbol streams from two or more receiving stations. Symbol stream combining has both cost and performance advantages over other arraying methods. Demonstrations are planned on Voyager 2 both prior to and during Uranus encounter. Operational use is possible for interagency arraying of non-Deep Space Network stations at Neptune encounter.

  1. Assessment of arrays of in-stream tidal turbines in the Bay of Fundy.

    Science.gov (United States)

    Karsten, Richard; Swan, Amanda; Culina, Joel

    2013-02-28

    Theories of in-stream turbines are adapted to analyse the potential electricity generation and impact of turbine arrays deployed in Minas Passage, Bay of Fundy. Linear momentum actuator disc theory (LMADT) is combined with a theory that calculates the flux through the passage to determine both the turbine power and the impact of rows of turbine fences. For realistically small blockage ratios, the theory predicts that extracting 2000-2500 MW of turbine power will result in a reduction in the flow of less than 5 per cent. The theory also suggests that there is little reason to tune the turbines if the blockage ratio remains small. A turbine array model is derived that extends LMADT by using the velocity field from a numerical simulation of the flow through Minas Passage and modelling the turbine wakes. The model calculates the resulting speed of the flow through and around a turbine array, allowing for the sequential positioning of turbines in regions of strongest flow. The model estimates that over 2000 MW of power is possible with only a 2.5 per cent reduction in the flow. If turbines are restricted to depths less than 50 m, the potential power generation is reduced substantially, down to 300 MW. For large turbine arrays, the blockage ratios remain small and the turbines can produce maximum power with a drag coefficient equal to the Betz-limit value.
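
    The quoted figures can be sanity-checked with the standard kinetic power flux and the Betz coefficient C_p = 16/27, which LMADT recovers in the small-blockage limit; the flow speed, density, and swept area below are illustrative values, not Minas Passage measurements.

```python
# Back-of-envelope turbine count for a 2000 MW array, assuming the
# Betz-limit power coefficient and illustrative site parameters.
rho = 1025.0          # seawater density, kg/m^3
u = 4.0               # stream speed, m/s
area = 300.0          # swept area per turbine, m^2
c_p = 16.0 / 27.0     # Betz-limit power coefficient

power_per_turbine = 0.5 * rho * area * c_p * u ** 3   # watts
n_turbines = 2000e6 / power_per_turbine               # to reach 2000 MW
```

    With these assumptions each turbine extracts a few megawatts, so an array of a few hundred machines reaches the 2000 MW scale discussed above.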

  2. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel-processing framework. The approach is first to design a computational structure well suited to a wide range of vision tasks, and then to develop parallel algorithms that run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory-array architecture with efficient local and global communication capabilities can be used for high-speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high-level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  3. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    The engineering design of large-scale grid-connected photovoltaic power stations, and the many simulation and analysis systems being developed for them, require accurate computer drawing of the operating characteristic curves of photovoltaic array units; a segmented non-linear interpolation algorithm is proposed for this purpose. In the calculation method, module performance parameters serve as the main design basis, from which the computer obtains five characteristic performance points of the PV module. Combined with the series and parallel connections of the PV array, computer drawing of the performance curve of the PV array unit can then be realised. The specific data can also be passed to PV development software for calculation, improving the method's usefulness in the practical operation of PV array units.
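
    The drawing procedure can be sketched with a simplified single-diode model: sample the I-V relation at coarse knots, densify by piecewise interpolation, and scale for the series/parallel array unit. The diode parameters, knot counts, and the use of plain linear interpolation (rather than the paper's segmented non-linear scheme) are all illustrative assumptions.

```python
import numpy as np

# Simplified single-diode module model (series resistance neglected).
i_sc, i_0, v_t = 8.5, 1e-9, 1.8       # short-circuit A, saturation A, n*Ns*Vt

def module_current(v):
    return i_sc - i_0 * (np.exp(v / v_t) - 1.0)

v_knots = np.linspace(0.0, 41.0, 20)          # coarse samples of the curve
i_knots = module_current(v_knots)
v_fine = np.linspace(0.0, 41.0, 400)
i_fine = np.interp(v_fine, v_knots, i_knots)  # densified curve for plotting

# Series/parallel scaling for an array unit of ns x np_ modules.
ns, np_ = 10, 5
array_v, array_i = v_fine * ns, i_fine * np_
```

    Series modules scale the voltage axis and parallel strings scale the current axis, which is how the module curve becomes the array-unit operating curve.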

  4. Efficient Processing of Continuous Skyline Query over Smarter Traffic Data Stream for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wang Hanning

    2013-01-01

    Full Text Available The analysis and processing of multi-source real-time transportation data streams lay a foundation for smart transportation's sensing, interconnection, integration, and real-time decision making. The strong computing ability and effective mass-data management provided by cloud computing make it feasible to handle continuous Skyline queries over massive, distributed, uncertain transportation data streams. In this paper, we give an architecture for layered smart-transportation data processing, and we formalize the description of continuous Skyline queries over smart-transportation data. In addition, we propose the mMR-SUDS algorithm (a Skyline query algorithm for uncertain transportation stream data based on micro-batches in MapReduce), built on sliding-window division and this architecture.
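
    The dominance test at the heart of a Skyline query is compact enough to state directly; this minimal version works over one window of tuples and omits the uncertainty handling, micro-batching, and MapReduce distribution of mMR-SUDS. The example attributes (travel time, congestion) are invented.

```python
def skyline(points):
    """Skyline (Pareto frontier) under 'smaller is better' on every
    dimension: keep the tuples no other tuple dominates."""
    def dominates(p, q):
        # p dominates q: no worse anywhere, strictly better somewhere.
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Sliding-window use: recompute over the most recent W tuples, e.g.
# (travel_time, congestion) readings from a traffic stream.
window = [(10, 0.9), (12, 0.4), (15, 0.2), (11, 0.8), (16, 0.3)]
```
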

  5. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLAs) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource-allocation problem as a multicriteria decision-making problem and propose an effective trade-off approach based on a goal-programming model. To validate its effectiveness, a cloud architecture for streaming applications is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.

  6. Use of computer programs STLK1 and STWT1 for analysis of stream-aquifer hydraulic interaction

    Science.gov (United States)

    Desimone, Leslie A.; Barlow, Paul M.

    1999-01-01

    Quantifying the hydraulic interaction of aquifers and streams is important in the analysis of stream base flow, flood-wave effects, and contaminant transport between surface- and ground-water systems. This report describes the use of two computer programs, STLK1 and STWT1, to analyze the hydraulic interaction of streams with confined, leaky, and water-table aquifers during periods of stream-stage fluctuations and uniform, areal recharge. The computer programs are based on analytical solutions to the ground-water-flow equation in stream-aquifer settings and calculate ground-water levels, seepage rates across the stream-aquifer boundary, and bank storage that result from arbitrarily varying stream stage or recharge. Analysis of idealized, hypothetical stream-aquifer systems is used to show how aquifer type, aquifer boundaries, and aquifer and streambank hydraulic properties affect aquifer response to stresses. Published data from alluvial and stratified-drift aquifers in Kentucky, Massachusetts, and Iowa are used to demonstrate application of the programs to field settings. Analytical models of these three stream-aquifer systems are developed on the basis of available hydrogeologic information. Stream-stage fluctuations and recharge are applied to the systems as hydraulic stresses. The models are calibrated by matching ground-water levels calculated with computer program STLK1 or STWT1 to measured ground-water levels. The analytical models are used to estimate hydraulic properties of the aquifer, aquitard, and streambank; to evaluate hydrologic conditions in the aquifer; and to estimate seepage rates and bank-storage volumes resulting from flood waves and recharge. Analysis of field examples demonstrates the accuracy and limitations of the analytical solutions and programs when applied to actual ground-water systems and the potential uses of the analytical methods as alternatives to numerical modeling for quantifying stream-aquifer interactions.
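Because STLK1 and STWT1 are built on analytical solutions, the response to an arbitrarily varying stream stage is obtained by superposing scaled, time-shifted unit responses. A minimal sketch of that superposition principle (the unit-step-response values here are hypothetical placeholders, not the programs' analytical solutions):

```python
import numpy as np

def superpose(stage_changes, unit_step_response):
    """Ground-water head change at an observation point, computed as the
    discrete convolution of stream-stage increments with the aquifer's
    unit step response (linear-system superposition), truncated to the
    simulated period."""
    return np.convolve(stage_changes, unit_step_response)[: len(stage_changes)]
```

For example, two successive unit rises in stage simply add their shifted responses; the real programs supply the step response from closed-form solutions for confined, leaky, or water-table aquifers.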

  7. Terahertz computed tomography in three-dimensional using a pyroelectric array detector

    Science.gov (United States)

    Li, Bin; Wang, Dayong; Zhou, Xun; Rong, Lu; Huang, Haochong; Wan, Min; Wang, Yunxin

    2017-05-01

    The terahertz frequency range spans from 0.1 to 10 THz. Terahertz radiation can penetrate nonpolar and nonmetallic materials, such as plastics, wood, and clothing; this penetrability gives terahertz imaging important research value. Terahertz computed tomography exploits the penetrability of terahertz radiation to obtain three-dimensional projection data of an object. In this paper, continuous-wave terahertz computed tomography with a pyroelectric array detector is presented. Compared with scanning terahertz computed tomography, a pyroelectric array detector can acquire a large number of projections in a short time, since its acquisition mode omits the point-by-point scan in the vertical and horizontal directions. Two-dimensional cross-sectional images of the object are obtained by the filtered back-projection algorithm. The two walls of the straw span 80 pixels, so multiplying by the pixel size gives a straw diameter of about 6.4 mm; compared with the actual diameter of the straw, the relative error is 6%. To reconstruct a three-dimensional image of the straw's internal structure, rows 70 to 150 in the y direction of the array detector are selected and reconstructed with the filtered back-projection algorithm. Since the pixel size is 80 μm, the height of the three-dimensional image of the straw's internal structure is 6.48 mm. The presented system can rapidly reconstruct three-dimensional objects using a pyroelectric array detector and demonstrates the feasibility of non-destructive evaluation and security testing.

  8. FORTRAN computer programs to process Savannah River Laboratory hydrogeochemical and stream-sediment reconnaissance data

    International Nuclear Information System (INIS)

    Zinkl, R.J.; Shettel, D.L. Jr.; D'Andrea, R.F. Jr.

    1980-03-01

    FORTRAN computer programs have been written to read, edit, and reformat the hydrogeochemical and stream-sediment reconnaissance data produced by Savannah River Laboratory for the National Uranium Resource Evaluation program. The data are presorted by Savannah River Laboratory into stream sediment, ground water, and stream water for each 1° x 2° quadrangle. Extraneous information is eliminated, and missing analyses are assigned a specific value (-99999.0). Negative analyses are below the detection limit; the absolute value of a negative analysis is assumed to be the detection limit.

  9. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    Science.gov (United States)

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

    A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to only pay a cloud computing provider for what is used. Moreover, beyond financial efficiency, cloud computing is an ecologically friendly technology; it enables efficient data sharing, and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
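The notion of a "G-spot" is easy to make concrete in code. The run-length threshold of four guanines below is our assumption for when a quadruplex could plausibly form, not a value quoted in the record:

```python
import re

# Assumed threshold: a run of four or more consecutive guanines, the minimum
# needed for four G-tetrads to stack into a quadruplex (hypothetical cutoff).
G_RUN = re.compile(r"G{4,}")

def find_g_spots(probe_seq):
    """Return (start, length) of each guanine run of length >= 4 in a probe
    sequence, e.g. a 25-mer Affymetrix probe."""
    return [(m.start(), len(m.group())) for m in G_RUN.finditer(probe_seq.upper())]
```

Mapping where such runs fall within each probe is exactly the kind of per-probe scan that parallelizes trivially across a cloud cluster.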

  10. Green computing: efficient energy management of multiprocessor streaming applications via model checking

    NARCIS (Netherlands)

    Ahmad, W.

    2017-01-01

    Streaming applications such as virtual reality, video conferencing, and face detection, impose high demands on a system’s performance and battery life. With the advancement in mobile computing, these applications are increasingly implemented on battery-constrained platforms, such as gaming consoles,

  11. SIGMA, a new language for interactive array-oriented computing

    International Nuclear Information System (INIS)

    Hagedorn, R.; Reinfelds, J.; Vandoni, C.; Hove, L. van.

    1978-01-01

    A description is given of the principles and the main facilities of SIGMA (System for Interactive Graphical Mathematical Applications), a programming language for scientific computing whose major characteristics are: automatic handling of multi-dimensional rectangular arrays as basic data units, interactive operation of the system, and graphical display facilities. After introducing the basic concepts and features of the language, it describes in some detail the methods and operators for the automatic handling of arrays and for their graphical display, the procedures for construction of programs by users, and other facilities of the system. The report is a new version of CERN 73-5. (Auth.)

  12. Pilot-Streaming: Design Considerations for a Stream Processing Framework for High-Performance Computing

    OpenAIRE

    Andre Luckow; Peter Kasson; Shantenu Jha

    2016-01-01

    This White Paper (submitted to STREAM 2016) identifies an approach to integrate streaming data with HPC resources. The paper outlines the design of Pilot-Streaming, which extends the concept of Pilot-abstraction to streaming real-time data.

  13. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  14. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    Science.gov (United States)

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large-scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of stream processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
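As a toy illustration of the kind of low-latency, sample-at-a-time processing a streams platform performs (this sketch is ours and is not the detector evaluated in the paper), a QRS detector can be reduced to an energy threshold with a refractory period:

```python
def detect_qrs(ecg, fs, threshold_factor=0.6, refractory_s=0.2):
    """Very simple streaming QRS detector: square each sample to emphasise the
    high-amplitude QRS complex, flag samples whose energy exceeds a fraction of
    the running maximum, and enforce a refractory period so one beat is not
    counted twice. Parameters are illustrative, not clinically tuned."""
    refractory = int(refractory_s * fs)       # minimum samples between beats
    peaks, last_peak, running_max = [], -refractory, 1e-12
    for i, x in enumerate(ecg):
        e = x * x                             # instantaneous energy
        running_max = max(running_max, e)
        if e > threshold_factor * running_max and i - last_peak > refractory:
            peaks.append(i)
            last_peak = i
    return peaks
```

The loop touches each sample once and keeps O(1) state, which is what makes such logic a natural fit for a tuple-at-a-time streaming engine.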

  15. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that needs to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive through the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in NRC Frontiers of Data and the National Strategic Computing Initiative (NCSI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report

  16. Fast algorithm for automatically computing Strahler stream order

    Science.gov (United States)

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
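A sketch of the recursive definition behind Strahler ordering follows. Note that this simple version assumes a tree-structured network; the paper's contribution is precisely handling topologically complex cases such as braided streams and multiple outlets, which this sketch does not cover:

```python
def strahler_orders(children):
    """Compute the Strahler order of every segment in a tree-structured stream
    network. `children` maps each segment to the list of segments flowing into
    it (headwater segments map to an empty list). A headwater segment has
    order 1; a segment fed by two or more children of the maximum upstream
    order gets that order plus one, otherwise it inherits the maximum."""
    memo = {}
    def order(seg):
        if seg not in memo:
            ups = [order(c) for c in children.get(seg, [])]
            if not ups:
                memo[seg] = 1
            else:
                top = max(ups)
                memo[seg] = top + 1 if ups.count(top) >= 2 else top
        return memo[seg]
    return {seg: order(seg) for seg in children}
```

The memoized recursion visits each segment once, matching the near-linear scaling in segment count reported for the GIS algorithm; very deep networks would call for an iterative (stack-based) variant to avoid recursion limits.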

  17. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers implementing the Bellman-Ford and Lee algorithms for finding the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transforming sequential programs into MISD programs is a labor-intensive process because it requires the stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode, and parallel MISD modifications of these algorithms, were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and has a pronouncedly applied nature. The article also presents the results of analyzing the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the basic trends of a technique for parallelizing algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
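For reference, the conventional sequential Bellman-Ford algorithm that the article transforms into MISD form (the split between the CPU stream and the structures-processor stream is not shown here):

```python
def bellman_ford(n, edges, source):
    """Classic Bellman-Ford: relax every edge up to n-1 times; returns the list
    of shortest distances from `source` to each of the n vertices, or raises
    if a negative cycle is reachable. `edges` is a list of (u, v, weight)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:              # the edge scan is the structures-
            if dist[u] + w < dist[v]:      # processing part; the comparison and
                dist[v] = dist[u] + w      # addition are arithmetic-logical work
                updated = True
        if not updated:                    # early exit once distances settle
            break
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist
```

The inner edge scan is pure data-structure traversal, which is why the algorithm is a natural candidate for offloading to a structures processor.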

  18. Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams

    Science.gov (United States)

    Bennamoun, Mohammed

    2014-01-01

    In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses. PMID:24817888

  19. Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams

    Directory of Open Access Journals (Sweden)

    Jaafar Abduo

    2014-01-01

    Full Text Available In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses.

  20. An end-to-end computing model for the Square Kilometre Array

    NARCIS (Netherlands)

    Jongerius, R.; Wijnholds, S.; Nijboer, R.; Corporaal, H.

    2014-01-01

    For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model for engineers and astronomers to understand these relations and make tradeoffs in

  1. Prospects for quantum computing with an array of ultracold polar paramagnetic molecules.

    Science.gov (United States)

    Karra, Mallikarjun; Sharma, Ketan; Friedrich, Bretislav; Kais, Sabre; Herschbach, Dudley

    2016-03-07

    Arrays of trapped ultracold molecules represent a promising platform for implementing a universal quantum computer. DeMille [Phys. Rev. Lett. 88, 067901 (2002)] has detailed a prototype design based on Stark states of polar ¹Σ molecules as qubits. Herein, we consider an array of polar ²Σ molecules which are, in addition, inherently paramagnetic and whose Hund's case (b) free-rotor pair-eigenstates are Bell states. We show that by subjecting the array to combinations of concurrent homogeneous and inhomogeneous electric and magnetic fields, the entanglement of the array's Stark and Zeeman states can be tuned and the qubit sites addressed. Two schemes for implementing an optically controlled CNOT gate are proposed and their feasibility discussed in the face of the broadening of spectral lines due to dipole-dipole coupling and the inhomogeneity of the electric and magnetic fields.

  2. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of recent research on programmable cellular arrays in computing and digital information-processing systems is presented; it includes both combinational and sequential arrays, either with fully arbitrary behaviour or realizing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, and memory blocks. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  3. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    Energy Technology Data Exchange (ETDEWEB)

    Markelov, G [Advanced Operations and Engineering Services (AOES) Group BV, Postbus 342, 2300 AH Leiden (Netherlands); Endemann, M [ESA-ESTEC/EOP-PAS, Postbus 299, 2200 AG Noordwijk (Netherlands); Wernham, D [ESA-ESTEC/EOP-PAQ, Postbus 299, 2200 AG Noordwijk (Netherlands)], E-mail: Gennady.Markelov@aoes.com

    2008-03-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to contamination of the ALADIN optics. The paper presents a numerical approach based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species, in the case under consideration between the free-stream flow and the outgassing from solar array materials. These collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with the prediction of a simple analytical model shows that the analytical model underpredicts the mass fluxes.

  4. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    International Nuclear Information System (INIS)

    Markelov, G; Endemann, M; Wernham, D

    2008-01-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to contamination of the ALADIN optics. The paper presents a numerical approach based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species, in the case under consideration between the free-stream flow and the outgassing from solar array materials. These collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with the prediction of a simple analytical model shows that the analytical model underpredicts the mass fluxes.

  5. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years arrays have been widely used in different fields of seismological research. In particular they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of eight dual-processor Intel Pentium III PCs working at 550 MHz, and has 4 gigabytes of RAM. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of this volcano.
A real time continuous acquisition system has been simulated by
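A minimal single-frequency MUSIC-style pseudo-spectrum for a line array can be sketched as follows. This is an illustrative simplification of the technique named in the record, not the cluster's Fortran/MPI implementation, and the linear array geometry and narrowband model are our assumptions:

```python
import numpy as np

def music_spectrum(snapshots, positions, wavelength, angles, n_sources):
    """MUSIC pseudo-spectrum for a narrowband line array.
    snapshots: complex array (n_sensors, n_snapshots) of sensor outputs.
    Projects steering vectors onto the noise subspace of the sample
    covariance; peaks of the spectrum mark the arrival angles (radians)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    w, v = np.linalg.eigh(R)                                  # ascending eigenvalues
    En = v[:, : len(positions) - n_sources]                   # noise subspace
    spec = []
    for th in angles:
        a = np.exp(2j * np.pi * positions * np.sin(th) / wavelength)
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)
```

Evaluating the spectrum over a dense angle grid for every time window is the embarrassingly parallel workload that an MPI cluster splits across nodes.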

  6. PREVENTIVE SIGNATURE MODEL FOR SECURE CLOUD DEPLOYMENT THROUGH FUZZY DATA ARRAY COMPUTATION

    Directory of Open Access Journals (Sweden)

    R. Poorvadevi

    2017-01-01

    Full Text Available Cloud computing is a resource pool which offers boundless services in the form of resources to end users, who depend heavily on cloud service providers. The cloud provides service access across geographic locations in an efficient way. However, although it offers numerous services, the client-side system lacks adequate methods, security policies, and other protocols for protecting cloud customers' secret-level transactions and other privacy-related information. This proposed model therefore brings a solution for securing cloud users' confidential data and application deployments, and for identifying the genuineness of the user, by applying a scheme referred to as fuzzy data array computation. Fuzzy data array computation provides an effective system, called the signature retrieval and evaluation system, through which customers' data can be safeguarded along with their applications. This signature system can be implemented in a cloud environment using the CloudSim 3.0 simulator toolkit. It facilitates security operations over data centre and cloud vendor locations in an effective manner.

  7. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.
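As an illustration of the combining step, the sketch below forms a weighted sum of soft-symbol streams from several antennas before Viterbi decoding. The specific weights are an assumption here, chosen in proportion to per-antenna SNR in the spirit of maximum-ratio combining; the paper derives the optimal weighting for its coded setting:

```python
import numpy as np

def combine_symbol_streams(streams, snrs):
    """Weighted sum of soft-symbol streams received at several antennas.
    streams: array-like (n_antennas, n_symbols) of soft symbol values.
    snrs: per-antenna SNR estimates; weighting in proportion to SNR
    (normalised to unit gain) is the maximum-ratio rule, and the combined
    stream is then fed to a single Viterbi decoder."""
    streams = np.asarray(streams, dtype=float)
    weights = np.asarray(snrs, dtype=float)
    weights = weights / weights.sum()   # unit-gain combiner
    return weights @ streams
```

Combining at the symbol level means only one decoder runs on the arrayed signal, rather than one per antenna.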

  8. Prototype of a production system for Cherenkov Telescope Array with DIRAC

    CERN Document Server

    Arrabito, L; Haupt, A; Graciani Diaz, R; Stagni, F; Tsaregorodtsev, A

    2015-01-01

    The Cherenkov Telescope Array (CTA) — an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale — is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of about 10 GB/s for about 1000 hours of observation per year, thus producing several PB/year, is expected. Large CPU time is required for data-processing as well for massive Monte Carlo simulations needed for detector calibration purposes. The current CTA computing model is based on a distributed infrastructure for the archive and the data off-line processing. In order to manage the off-line data-processing in a distributed environment, CTA has evaluated the DIRAC (Distributed Infrastructure with Remote Agent Control) system, which is a general framework for the management of tasks over distributed heterogeneous computing environments. In particular, a production sy...

  9. Efficient Buffer Capacity and Scheduler Setting Computation for Soft Real-Time Stream Processing Applications

    NARCIS (Netherlands)

    Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef

    2007-01-01

    Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique

  10. Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Yang-Yang Dong

    2018-01-01

    Full Text Available Although an L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for L-shaped arrays with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for L-shaped arrays and compensate for the mutual coupling blindly by sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong source-signal correlation. Finally, the estimates of azimuth and elevation angles are obtained simultaneously, without pair matching, via the complex eigenvalue technique. Compared with existing methods, the proposed method is computationally efficient, requiring no spectrum search or polynomial rooting, and also has fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.

  11. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform to deal with data streams.

  12. RStorm : Developing and testing streaming algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform to deal with data streams.

  13. Omniscopes: Large area telescope arrays with only NlogN computational cost

    International Nuclear Information System (INIS)

    Tegmark, Max; Zaldarriaga, Matias

    2010-01-01

    We show that the class of antenna layouts for telescope arrays allowing cheap analysis hardware (with correlator cost scaling as N log N rather than N² with the number of antennas N) is encouragingly large, including not only previously discussed rectangular grids but also arbitrary hierarchies of such grids, with arbitrary rotations and shears at each level. We show that all correlations for such a 2D array with an n-level hierarchy can be efficiently computed via a fast Fourier transform in not two but 2n dimensions. This can allow major correlator cost reductions for science applications requiring exquisite sensitivity at widely separated angular scales, for example, 21 cm tomography (where short baselines are needed to probe the cosmological signal and long baselines are needed for point source removal), helping enable future 21 cm experiments with thousands or millions of cheap dipolelike antennas. Such hierarchical grids combine the angular resolution advantage of traditional array layouts with the cost advantage of a rectangular fast Fourier transform telescope. We also describe an algorithm for how a subclass of hierarchical arrays can efficiently use rotation synthesis to produce global sky maps with minimal noise and a well-characterized synthesized beam.
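The cost saving can be illustrated for a single rectangular grid (one level of the hierarchy): instead of forming all pairwise correlations explicitly, each snapshot of antenna voltages is Fourier transformed and squared, at O(N log N) per snapshot. This sketch of the FFT-telescope idea is ours, not the authors' pipeline:

```python
import numpy as np

def fft_beamform(snapshots):
    """For antennas on a regular 2D grid, the time-averaged power received from
    every sky direction is the mean of |2-D FFT|^2 over voltage snapshots:
    O(N log N) per snapshot instead of the O(N^2) cost of computing all
    pairwise antenna correlations explicitly."""
    return np.mean([np.abs(np.fft.fft2(s)) ** 2 for s in snapshots], axis=0)
```

A flat wavefront (identical voltage at every antenna) concentrates all power in the zero-spatial-frequency bin, as expected for a source at the grid's phase centre.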

  14. Real-time change detection in data streams with FPGAs

    International Nuclear Information System (INIS)

    Vega, J.; Dormido-Canto, S.; Cruz, T.; Ruiz, M.; Barrera, E.; Castro, R.; Murari, A.; Ochando, M.

    2014-01-01

    Highlights: • Automatic recognition of changes in data streams of multidimensional signals. • Detection algorithm based on testing exchangeability on-line. • Real-time and off-line applicability. • Real-time implementation in FPGAs. - Abstract: The automatic recognition of changes in data streams is useful in both real-time and off-line data analyses. This article shows several effective change-detecting algorithms (based on martingales) and describes their real-time applicability in data acquisition systems through the use of Field Programmable Gate Arrays (FPGAs). The automatic event recognition system is absolutely general and it does not depend on either the particular event to detect or the specific data representation (waveforms, images or multidimensional signals). The developed approach provides good results for change detection in both the temporal evolution of profiles and the two-dimensional spatial distribution of volume emission intensity. The average computation time in the FPGA is 210 μs per profile.
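A software sketch of a martingale-based change detector of the kind described follows. The power-martingale form and its parameters are our assumptions for illustration; the paper's FPGA implementation and its construction of p-values from the data stream are not shown:

```python
def martingale_change_detector(pvalues, eps=0.92, threshold=20.0):
    """Power martingale over a stream of conformal p-values: M is multiplied
    by eps * p**(eps - 1) for each new p-value. Under exchangeability
    (no change) M stays small; a change produces many small p-values,
    driving M above the alarm threshold. Returns the alarm indices."""
    m, alarms = 1.0, []
    for i, p in enumerate(pvalues):
        m *= eps * max(p, 1e-12) ** (eps - 1.0)  # guard against p == 0
        if m > threshold:
            alarms.append(i)
            m = 1.0                              # restart after an alarm
    return alarms
```

The per-sample update is a single multiply and compare, which is what makes the scheme amenable to a pipelined FPGA implementation.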

  15. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    KAUST Repository

    Wang, Wei; Guyet, Thomas; Quiniou, René; Cordier, Marie-Odile; Masseglia, Florent; Zhang, Xiangliang

    2014-01-01

    In this work, we propose a novel framework of autonomic intrusion detection that fulfills online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-management: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamical clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected in our institute, as well as a set of benchmark KDD'99 data, are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to the adaptive Sequential Karhunen–Loeve method and static AP, as well as three other static anomaly detection methods, namely k-NN, PCA and SVM.

  17. Stream function method for computing steady rotational transonic flows with application to solar wind-type problems

    International Nuclear Information System (INIS)

    Kopriva, D.A.

    1982-01-01

    A numerical scheme has been developed to solve the quasilinear form of the transonic stream function equation. The method is applied to compute steady two-dimensional axisymmetric solar wind-type problems. A single perfect, non-dissipative, homentropic, polytropic gas is assumed. The four equations governing mass and momentum conservation are reduced to a single nonlinear second-order partial differential equation for the stream function. Bernoulli's equation is used to obtain a nonlinear algebraic relation for the density in terms of stream function derivatives. The vorticity includes the effects of azimuthal rotation and Bernoulli's function and is determined from quantities specified on boundaries. The approach is efficient: the number of equations and independent variables is reduced, and a rapid relaxation technique developed for the transonic full-potential equation is used. Second-order accurate central differences are used in elliptic regions. In hyperbolic regions a dissipation term motivated by the rotated differencing scheme of Jameson is added for stability. A successive line overrelaxation technique, also introduced by Jameson, is used to solve the equations. The nonlinear equation for the density is a double-valued function of the stream function derivatives. The velocities are extrapolated from upwind points to determine the proper branch, and Newton's method is used to iteratively compute the density. This allows accurate solutions with few grid points.

  18. Analysis of hydraulic characteristics for stream diversion in small stream

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang-Jin; Jun, Kye-Won [Chungbuk National University, Cheongju(Korea)

    2001-10-31

    This study analyzes the hydraulic characteristics of a stream diversion reach in a small stream by numerical model tests, providing basic data for flood analysis and for understanding stream flow characteristics. The hydraulic characteristics of the Seoknam stream were analyzed using the computer models HEC-RAS (a one-dimensional model) and RMA2 (a two-dimensional finite element model). The results show that RMA2, which simulates the left overbank, main channel, and right overbank of the stream separately, is more effective than HEC-RAS for analyzing flow where channel bends, steep slopes, and complex bed forms affect stream flow characteristics. (author). 13 refs., 3 tabs., 5 figs.

  19. StreamStats in Oklahoma - Drainage-Basin Characteristics and Peak-Flow Frequency Statistics for Ungaged Streams

    Science.gov (United States)

    Smith, S. Jerrod; Esralew, Rachel A.

    2010-01-01

    The USGS Streamflow Statistics (StreamStats) Program was created to make geographic information systems-based estimation of streamflow statistics easier, faster, and more consistent than previously used manual techniques. The StreamStats user interface is a map-based internet application that allows users to easily obtain streamflow statistics, basin characteristics, and other information for user-selected U.S. Geological Survey data-collection stations and ungaged sites of interest. The application relies on the data collected at U.S. Geological Survey streamflow-gaging stations, computer-aided computation of drainage-basin characteristics, and published regression equations for several geographic regions comprising the United States. The StreamStats application interface allows the user to (1) obtain information on features in selected map layers, (2) delineate drainage basins for ungaged sites, (3) download drainage-basin polygons to a shapefile, (4) compute selected basin characteristics for delineated drainage basins, (5) estimate selected streamflow statistics for ungaged points on a stream, (6) print map views, (7) retrieve information for U.S. Geological Survey streamflow-gaging stations, and (8) get help on using StreamStats. StreamStats was designed for national application, with each state, territory, or group of states responsible for creating unique geospatial datasets and regression equations to compute selected streamflow statistics. With the cooperation of the Oklahoma Department of Transportation, StreamStats has been implemented for Oklahoma and is available at http://water.usgs.gov/osw/streamstats/. The Oklahoma StreamStats application covers 69 processed hydrologic units and most of the state of Oklahoma. Basin characteristics available for computation include contributing drainage area, contributing drainage area that is unregulated by Natural Resources Conservation Service floodwater retarding structures, mean-annual precipitation at the

  20. Dynamical modeling of tidal streams

    International Nuclear Information System (INIS)

    Bovy, Jo

    2014-01-01

    I present a new framework for modeling the dynamics of tidal streams. The framework consists of simple models for the initial action-angle distribution of tidal debris, which can be straightforwardly evolved forward in time. Taking advantage of the essentially one-dimensional nature of tidal streams, the transformation to position-velocity coordinates can be linearized and interpolated near a small number of points along the stream, thus allowing for efficient computations of a stream's properties in observable quantities. I illustrate how to calculate the stream's average location (its 'track') in different coordinate systems, how to quickly estimate the dispersion around its track, and how to draw mock stream data. As a generative model, this framework allows one to compute the full probability distribution function and marginalize over or condition it on certain phase-space dimensions as well as convolve it with observational uncertainties. This will be instrumental in proper data analysis of stream data. In addition to providing a computationally efficient practical tool for modeling the dynamics of tidal streams, the action-angle nature of the framework helps elucidate how the observed width of the stream relates to the velocity dispersion or mass of the progenitor, and how the progenitors of 'orphan' streams could be located. The practical usefulness of the proposed framework crucially depends on the ability to calculate action-angle variables for any orbit in any gravitational potential. A novel method for calculating actions, frequencies, and angles in any static potential using a single orbit integration is described in the Appendix.

  1. Stream Deniable-Encryption Algorithms

    Directory of Open Access Journals (Sweden)

    N.A. Moldovyan

    2016-04-01

    A method for stream deniable encryption of a secret message is proposed, which is computationally indistinguishable from the probabilistic encryption of some fake message. The method generates two key streams with some secure block cipher: one depending on the secret key and the other depending on the fake key. The key streams are mixed with the secret and fake data streams so that the output ciphertext looks like the ciphertext produced by some probabilistic encryption algorithm applied to the fake message while using the fake key. When the sender and/or receiver of the ciphertext are coerced to reveal the encryption key and the source message, they reveal the fake key and the fake message. To expose the deception, the coercer would have to demonstrate the possibility of an alternative decryption of the ciphertext, which is a computationally hard problem.
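
    The mixing step admits a toy illustration: build each ciphertext unit with the Chinese Remainder Theorem so that reduction under the secret keystream recovers the secret byte, while reduction under the fake keystream recovers the fake byte. This sketch is not the paper's cipher; the hash-based keystream merely stands in for the "secure block cipher", and the tiny moduli offer no real security:

```python
import hashlib

P, Q = 257, 263                  # small coprime moduli (toy parameters)
Q_INV_MOD_P = pow(Q, -1, P)      # modular inverse (Python 3.8+)

def keystream(key: bytes, n: int, modulus: int):
    """Toy keystream: hash in counter mode, reduced mod `modulus`."""
    return [int.from_bytes(
                hashlib.sha256(key + i.to_bytes(8, "big")).digest()[:4],
                "big") % modulus
            for i in range(n)]

def encrypt(secret: bytes, fake: bytes, secret_key: bytes, fake_key: bytes):
    """Each ciphertext unit c satisfies c = m + k1 (mod P) and
    c = f + k2 (mod Q) simultaneously, via the CRT."""
    assert len(secret) == len(fake)
    k1 = keystream(secret_key, len(secret), P)
    k2 = keystream(fake_key, len(fake), Q)
    cipher = []
    for m, f, a, b in zip(secret, fake, k1, k2):
        rp = (m + a) % P                 # residue the secret key unlocks
        rq = (f + b) % Q                 # residue the fake key unlocks
        c = (rq + Q * ((rp - rq) * Q_INV_MOD_P % P)) % (P * Q)
        cipher.append(c)
    return cipher

def decrypt(cipher, key: bytes, fake: bool):
    """Reduce mod Q with the fake key, mod P with the secret key;
    bytes are < min(P, Q), so each plaintext byte is recovered exactly."""
    mod = Q if fake else P
    ks = keystream(key, len(cipher), mod)
    return bytes((c % mod - k) % mod for c, k in zip(cipher, ks))
```

    Handing over the fake key yields a plausible decryption of the same ciphertext; nothing in a single unit reveals that a second residue channel exists.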

  2. Data streams: algorithms and applications

    National Research Council Canada - National Science Library

    Muthukrishnan, S

    2005-01-01

    ... massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [175]. S. Muthukrishnan Rutgers University, New Brunswick, NJ, USA, muthu@cs...

  3. A Statistical Model and Computer program for Preliminary Calculations Related to the Scaling of Sensor Arrays; TOPICAL

    International Nuclear Information System (INIS)

    Max Morris

    2001-01-01

    Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; however, in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
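
    The report's central trade-off (error common to all sensors does not average away as the array grows, while independent error does) can be reproduced with a small Monte Carlo sketch. The parameters and the sign-of-the-mean classification rule below are hypothetical stand-ins, not the report's actual model or program:

```python
import random

def misclassification_rate(n_sensors, trials=4000, common_sd=0.8,
                           indep_sd=1.0, separation=1.0, seed=1):
    """Two classes centered at -separation/2 and +separation/2; each sensor
    reading carries a common-mode error (shared by the whole array) plus an
    independent error. Classify by the sign of the array mean and estimate
    the misclassification rate. The common component sets an error floor
    that adding sensors cannot remove."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        true_class = rng.choice([-1, 1])
        signal = true_class * separation / 2
        common = rng.gauss(0, common_sd)          # shared by every sensor
        readings = [signal + common + rng.gauss(0, indep_sd)
                    for _ in range(n_sensors)]
        decided = 1 if sum(readings) / n_sensors > 0 else -1
        errors += decided != true_class
    return errors / trials
```

    With these assumed parameters the error rate falls as sensors are added, but levels off at a floor set by the common-mode term, matching the observation that dramatic increases in array size need not pay off proportionally.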

  4. Reduced-Complexity Direction of Arrival Estimation Using Real-Valued Computation with Arbitrary Array Configurations

    Directory of Open Access Journals (Sweden)

    Feng-Gang Yan

    2018-01-01

    A low-complexity algorithm is presented that dramatically reduces the complexity of the multiple signal classification (MUSIC) algorithm for direction of arrival (DOA) estimation, in which both the eigenvalue decomposition (EVD) and the spectral search are implemented with efficient real-valued computations, leading to about 75% complexity reduction compared to the standard MUSIC. Furthermore, the proposed technique does not depend on the array configuration and is hence suitable for arbitrary array geometries, a significant implementation advantage over most state-of-the-art unitary estimators, including unitary MUSIC (U-MUSIC). Numerical simulations over a wide range of scenarios demonstrate that, with a significantly reduced computational complexity, the new approach provides accuracy close to that of the standard MUSIC.

  5. Nanoscale phosphorus atom arrays created using STM for the fabrication of a silicon based quantum computer.

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, J. L. (Jeremy L.); Schofield, S. R. (Steven R.); Simmons, M. Y. (Michelle Y.); Clark, R. G. (Robert G.); Dzurak, A. S. (Andrew S.); Curson, N. J. (Neil J.); Kane, B. E. (Bruce E.); McAlpine, N. S. (Neal S.); Hawley, M. E. (Marilyn E.); Brown, G. W. (Geoffrey W.)

    2001-01-01

    Quantum computers offer the promise of formidable computational power for certain tasks. Of the various possible physical implementations of such a device, silicon-based architectures are attractive for their scalability and ease of integration with existing silicon technology. These designs use either the electron or nuclear spin state of single donor atoms to store quantum information. Here we describe a strategy to fabricate an array of single phosphorus atoms in silicon for the construction of such a silicon-based quantum computer. We demonstrate the controlled placement of single phosphorus-bearing molecules on a silicon surface. This has been achieved by patterning a hydrogen monolayer 'resist' with a scanning tunneling microscope (STM) tip and exposing the patterned surface to phosphine (PH3) molecules. We also describe preliminary studies into a process to incorporate these surface phosphorus atoms into the silicon crystal at the array sites. Keywords: quantum computing, nanotechnology, scanning tunneling microscopy, hydrogen lithography

  6. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    Science.gov (United States)

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  7. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.

  8. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    Science.gov (United States)

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles at any rural stream location in Illinois, based on the most recent (as of 2010) published regional flood-frequency equations (Soong and others, 2004). Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21) and the mean

  9. Reconfigurable Multicore Architectures for Streaming Applications

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Kokkeler, Andre B.J.; Rauwerda, G.K.; Jacobs, J.W.M.; Nicolescu, G.; Mosterman, P.J.

    2009-01-01

    This chapter addresses reconfigurable heterogenous and homogeneous multicore system-on-chip (SoC) platforms for streaming digital signal processing applications, also called DSP applications. In streaming DSP applications, computations can be specified as a data flow graph with streams of data items

  11. On a computational study for investigating acoustic streaming and heating during focused ultrasound ablation of liver tumor

    International Nuclear Information System (INIS)

    Solovchuk, Maxim A.; Sheu, Tony W.H.; Thiriet, Marc; Lin, Win-Li

    2013-01-01

    The influences of blood vessels and focal location on the temperature distribution during high-intensity focused ultrasound (HIFU) ablation of liver tumors are studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field in the hepatic cancerous region. The model construction is based on the linear Westervelt and bioheat equations as well as the nonlinear Navier–Stokes equations for the liver parenchyma and blood vessels. The effect of acoustic streaming is also taken into account in the present HIFU simulation study. Different blood vessel diameters and focal point locations were investigated. We found from this three-dimensional numerical study that in large blood vessels both the convective cooling and acoustic streaming can considerably change the temperature field and the thermal lesion near blood vessels. If the blood vessel is located within the beam width, both acoustic streaming and blood flow cooling effects should be addressed. The temperature rise on the blood vessel wall generated by a 1.0 MHz focused ultrasound transducer with a focal intensity of 327 W/cm² was 54% lower when the acoustic streaming effect was taken into account. At the applied acoustic power, the streaming velocity in a 3 mm blood vessel is 12 cm/s. The necrosed volume can be reduced by thirty percent when the acoustic streaming effect is taken into account. -- Highlights: • 3D three-field coupling physical model for focused ultrasound tumor ablation is presented. • Acoustic streaming and blood flow cooling effects on ultrasound heating are investigated. • Acoustic streaming can considerably affect the temperature distribution. • The lesion can be reduced by 30% due to the acoustic streaming effect. • Temperature on the blood vessel wall is reduced by 54% due to the acoustic streaming effect.

  12. The Square Kilometre Array Science Data Processor. Preliminary compute platform design

    International Nuclear Information System (INIS)

    Broekema, P.C.; Nieuwpoort, R.V. van; Bal, H.E.

    2015-01-01

    The Square Kilometre Array is a next-generation radio-telescope, to be built in South Africa and Western Australia. It is currently in its detailed design phase, with procurement and construction scheduled to start in 2017. The SKA Science Data Processor is the high-performance computing element of the instrument, responsible for producing science-ready data. This is a major IT project, with the Science Data Processor expected to challenge the computing state of the art even in 2020. In this paper we introduce the preliminary Science Data Processor design and the principles that guide the design process, as well as the constraints on the design. We introduce a highly scalable and flexible system architecture capable of handling the SDP workload.

  13. Computational investigation of hydrokinetic turbine arrays in an open channel using an actuator disk-LES model

    Science.gov (United States)

    Kang, Seokkoo; Yang, Xiaolei; Sotiropoulos, Fotis

    2012-11-01

    While a considerable amount of work has focused on studying the effects and performance of wind farms, very little is known about the performance of hydrokinetic turbine arrays in open channels. Unlike large wind farms, where the vertical fluxes of momentum and energy from the atmospheric boundary layer comprise the main transport mechanisms, the presence of free surface in hydrokinetic turbine arrays inhibits vertical transport. To explore this fundamental difference between wind and hydrokinetic turbine arrays, we carry out LES with the actuator disk model to systematically investigate various layouts of hydrokinetic turbine arrays mounted on the bed of a straight open channel with fully-developed turbulent flow fed at the channel inlet. Mean flow quantities and turbulence statistics within and downstream of the arrays will be analyzed and the effect of the turbine arrays as means for increasing the effective roughness of the channel bed will be extensively discussed. This work was supported by Initiative for Renewable Energy & the Environment (IREE) (Grant No. RO-0004-12), and computational resources were provided by Minnesota Supercomputing Institute.

  14. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    Science.gov (United States)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.

  15. Interactive collision detection for deformable models using streaming AABBs.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2007-01-01

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to retrieve only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and the timings ranged from 30 to 100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed a roughly threefold performance improvement over the earlier approach. We also made comparisons with a SW-based AABB
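
    The broad phase at the heart of this pipeline, all-pairs overlap tests between two streams of AABBs with a per-frame refit as the models deform, is simple to state sequentially. A CPU sketch of that stage (the paper executes it massively in parallel on the GPU):

```python
from typing import List, Tuple

# (minx, miny, minz, maxx, maxy, maxz)
AABB = Tuple[float, float, float, float, float, float]

def aabb_overlap(a: AABB, b: AABB) -> bool:
    """Two AABBs overlap iff their extents overlap on every axis."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def refit(tri) -> AABB:
    """Recompute the AABB of a deforming triangle from its vertices
    (the per-frame 'update' step applied to the AABB stream)."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs))

def broad_phase(boxes_a: List[AABB], boxes_b: List[AABB]):
    """All-pairs broad phase: returns index pairs whose boxes overlap.
    Surviving candidate pairs go on to exact triangle-level tests."""
    return [(i, j)
            for i, a in enumerate(boxes_a)
            for j, b in enumerate(boxes_b)
            if aabb_overlap(a, b)]
```

    Each overlap test is independent of all the others, which is exactly what makes the stage map well onto a streaming processor.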

  16. A real time sorting algorithm to time sort any deterministic time disordered data stream

    Science.gov (United States)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

    In new-generation, high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as no trigger information is available for the readout. These readouts are therefore prone to collecting large amounts of noise and unwanted data, and such experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to process the data online to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, so the unsorted merged data requires significant computational effort for real-time sorting before analysis. The present work reports a new high-speed, scalable data-stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, similar to what would be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder on FPGA-like hardware.
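
    In software, the bounded-disorder property of such streams lets a small min-heap buffer emit fully time-sorted output online. A sketch of the idea (illustrative only; the reported design uses parallel read-write memory blocks in FPGA fabric rather than a heap):

```python
import heapq

def time_sort(stream, max_disorder):
    """Online sort of (timestamp, payload) records under the guarantee
    that no record arrives more than `max_disorder` time units after a
    later-stamped record has already been seen. Buffers at most the
    records inside the disorder window; yields records in time order."""
    heap = []
    for ts, payload in stream:
        heapq.heappush(heap, (ts, payload))
        # Anything older than (current ts - max_disorder) can no longer
        # be preceded by an unseen record, so it is safe to emit.
        while heap and heap[0][0] <= ts - max_disorder:
            yield heapq.heappop(heap)
    while heap:                     # flush the buffer at end of stream
        yield heapq.heappop(heap)
```

    The buffer depth, and hence the memory footprint, is set by the deterministic disorder bound rather than by the total stream length, which is what makes the approach viable at readout rates.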

  17. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    Science.gov (United States)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
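The gamma-weighting step can be checked numerically. The sketch below (an illustration of the weighting idea, not the authors' code) averages an arbitrary plane-parallel quantity f(tau) over a gamma distribution of unresolved optical depth; taking f(tau) = exp(-tau) gives the closed form (1 + theta)^(-k), a convenient correctness check:

```python
import math

def gamma_pdf(tau, k, theta):
    """Gamma pdf with shape k and scale theta."""
    return tau ** (k - 1) * math.exp(-tau / theta) / (math.gamma(k) * theta ** k)

def gamma_weighted_mean(f, k, theta, tau_max=50.0, n=20000):
    """Average f(tau) over a gamma distribution of optical depth by the
    trapezoidal rule; tau_max must cover the distribution's tail."""
    h = tau_max / n
    total = 0.5 * (f(0.0) * gamma_pdf(1e-12, k, theta)  # guard tau=0 for k < 1
                   + f(tau_max) * gamma_pdf(tau_max, k, theta))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * gamma_pdf(tau, k, theta)
    return total * h
```

In the actual algorithm f(tau) would be a two-stream irradiance profile rather than a simple exponential, but the domain-averaging structure is the same.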

  18. Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface

    Directory of Open Access Journals (Sweden)

    N. Jeremy Hill

    2012-12-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two dichotically presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80% and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely

  19. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    Science.gov (United States)

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  20. S/sub N/ computational benchmark solutions for slab geometry models of a gas-cooled fast reactor (GCFR) lattice cell

    International Nuclear Information System (INIS)

    McCoy, D.R.

    1981-01-01

    S/sub N/ computational benchmark solutions are generated for a one-group and a multigroup fuel-void slab lattice cell, which is a rough model of a gas-cooled fast reactor (GCFR) lattice cell. The reactivity induced by the extrusion of the fuel material into the voided region is determined for a series of partially extruded lattice cell configurations. A special modified Gauss S/sub N/ ordinate array design is developed in order to obtain eigenvalues with errors less than 0.03% in all of the configurations considered. The modified Gauss S/sub N/ ordinate array design has substantially improved eigenvalue angular convergence behavior when compared to existing S/sub N/ ordinate array designs used in neutron streaming applications. The angular refinement computations are performed in some cases by using a perturbation theory method which enables one to obtain high-order S/sub N/ eigenvalue estimates at greatly reduced computational cost.

  1. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion

  2. Low-frequency synthesis array in earth orbit

    International Nuclear Information System (INIS)

    Jones, D.L.; Preston, R.A.; Kuiper, T.B.H.

    1987-01-01

    The scientific objectives and design concept of a space-based VLBI array for high-resolution astronomical observations at 1-30 MHz are discussed. The types of investigations calling for such an array include radio spectroscopy of individual objects, measurement of the effects of scattering and refraction by the interplanetary medium (IPM) and the ISM, mapping the distribution of low-energy cosmic-ray electrons, and determining the extent of the Galactic halo. Consideration is given to the limitations imposed on an LF VLBI array by the ionosphere, the IPM, and the ISM; the calibration advantages offered by circular polar orbits of slightly differing ascending-node longitude for the array satellites; and collection of the IF data streams from the array satellites by one master satellite prior to transmission to the ground. It is shown that determination of the three-dimensional array geometry by means of intersatellite radio links is feasible if there are at least seven spacecraft in the array

  3. streamgap-pepper: Effects of peppering streams with many small impacts

    Science.gov (United States)

    Bovy, Jo; Erkal, Denis; Sanders, Jason

    2017-02-01

    streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.

  4. Stream Lifetimes Against Planetary Encounters

    Science.gov (United States)

    Valsecchi, G. B.; Lega, E.; Froeschle, Cl.

    2011-01-01

    We study, both analytically and numerically, the perturbation induced by an encounter with a planet on a meteoroid stream. Our analytical tool is the extension of Öpik's theory of close encounters, which we apply to streams described by geocentric variables. The resulting formulae are used to compute the rate at which a stream is dispersed by planetary encounters into the sporadic background. We have verified the accuracy of the analytical model using a numerical test.

  5. Ultrasound-driven Viscous Streaming, Modelled via Momentum Injection

    Directory of Open Access Journals (Sweden)

    James PACKER

    2008-12-01

    Microfluidic devices can use steady streaming caused by the ultrasonic oscillation of one or many gas bubbles in a liquid to drive small-scale flow. Such streaming flows are difficult to evaluate, as analytic solutions are available only for the simplest cases, and direct computational fluid dynamics models are unsatisfactory due to the large difference in flow velocity between the steady streaming and the leading-order oscillatory motion. We develop a numerical technique which uses a two-stage, multiscale computational fluid dynamics approach to find the streaming flow as a steady problem, and validate this model against experimental results.

  6. Scanning tunnelling microscope fabrication of phosphorus array in silicon for a nuclear spin quantum computer

    International Nuclear Information System (INIS)

    O'Brien, J.L.; Schofield, S.R.; Simmons, M.Y.; Clark, R.G.; Dzurak, A.S.; Prawer, S.; Adrienko, I.; Cimino, A.

    2000-01-01

    In the vigorous worldwide effort to experimentally build a quantum computer, recent intense interest has focussed on solid-state approaches for their promise of scalability. Particular attention has been given to silicon-based proposals that can readily be integrated into conventional computing technology. For example, the Kane design uses the well-isolated nuclear spins of phosphorus donor nuclei (I = 1/2) as the qubits, embedded in isotopically pure 28Si (I = 0). We demonstrate the ability to fabricate a precise array of P atoms on a clean Si surface with atomic-scale resolution, compatible with the fabrication of the Kane quantum computer.

  7. The Ocean Observatories Initiative: Unprecedented access to real-time data streaming from the Cabled Array through OOI Cyberinfrastructure

    Science.gov (United States)

    Knuth, F.; Vardaro, M.; Belabbassi, L.; Smith, M. J.; Garzio, L. M.; Crowley, M. F.; Kerfoot, J.; Kawka, O. E.

    2016-02-01

    The National Science Foundation's Ocean Observatories Initiative (OOI) is a broad-scale, multidisciplinary facility that will transform oceanographic research by providing users with unprecedented access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The Cabled Array component of the OOI, installed and operated by the University of Washington, is located on the Juan de Fuca tectonic plate off the coast of Oregon. It is a unique network of >100 cabled instruments and instrumented moorings transmitting data to shore in real time via fiber optic technology. Instruments now installed include HD video and digital still cameras, mass spectrometers, a resistivity-temperature probe inside the orifice of a high-temperature hydrothermal vent, upward-looking ADCPs, pH and pCO2 sensors, Horizontal Electrometer Pressure Inverted Echosounders, and many others. Here, we present the technical aspects of data streaming from the Cabled Array through the OOI Cyberinfrastructure. We illustrate the types of instruments and data products available, data volume and density, processing levels and algorithms used, data delivery methods, file formats, and access methods through the graphical user interface. Our goal is to facilitate the use of and access to these unprecedented, co-registered oceanographic datasets. We encourage researchers to collaborate through the use of these simultaneous, interdisciplinary measurements in the exploration of short-lived events (tectonic, volcanic, biological, severe storms), as well as long-term trends in ocean systems (circulation patterns, climate change, ocean acidity, ecosystem shifts).

  8. New Potentiometric Wireless Chloride Sensors Provide High Resolution Information on Chemical Transport Processes in Streams

    Directory of Open Access Journals (Sweden)

    Keith Smettem

    2017-07-01

    Quantifying the travel times, pathways, and dispersion of solutes moving through stream environments is critical for understanding the biogeochemical cycling processes that control ecosystem functioning. Validation of stream solute transport and exchange process models requires data obtained from in-stream measurement of chemical concentration changes through time. This can be expensive and time consuming, leading to a need for cheap, distributed sensor arrays that respond instantly and record chemical transport at points of interest on timescales of seconds. To meet this need we apply new, low-cost (on the order of a euro per sensor) potentiometric chloride sensors in a distributed array to obtain data with high spatial and temporal resolution. The application here is the monitoring of in-stream hydrodynamic transport and dispersive mixing of an injected chemical, in this case NaCl. We present data obtained from the distributed sensor array under baseflow conditions for stream reaches in Luxembourg and Western Australia. The reaches were selected to provide a range of increasingly complex in-channel flow patterns. Mid-channel sensor results are comparable to data obtained from more expensive electrical conductivity meters, but simultaneous acquisition of tracer data at several positions across the channel allows far greater spatial resolution of hydrodynamic mixing processes and identification of chemical ‘dead zones’ in the study reaches.

  9. Introduction to stream: An Extensible Framework for Data Stream Clustering Research with R

    Directory of Open Access Journals (Sweden)

    Michael Hahsler

    2017-02-01

    In recent years, data streams have become an increasingly important area of research for the computer science, database and statistics communities. Data streams are ordered and potentially unbounded sequences of data points created by a typically non-stationary data-generating process. Common data mining tasks associated with data streams include clustering, classification and frequent pattern mining. New algorithms for these types of data are proposed regularly, and it is important to evaluate them thoroughly under standardized conditions. In this paper we introduce stream, a research tool that includes modeling and simulating data streams as well as an extensible framework for implementing, interfacing and experimenting with algorithms for various data stream mining tasks. The main advantage of stream is that it seamlessly integrates with the large existing infrastructure provided by R. In addition to data handling, plotting and easy scripting capabilities, R also provides many existing algorithms and enables users to interface code written in many programming languages popular among data mining researchers (e.g., C/C++, Java and Python). In this paper we describe the architecture of stream and focus on its use for data stream clustering research. stream was implemented with extensibility in mind and will be extended in the future to cover additional data stream mining tasks like classification and frequent pattern mining.
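stream itself is an R package; as a language-neutral illustration of the kind of single-pass algorithm such frameworks benchmark, here is a minimal sequential (MacQueen-style) k-means sketch (not part of the package):

```python
def online_kmeans(points, k):
    """Single-pass 'streaming' k-means: the first k points seed the centres;
    each later point moves its nearest centre toward itself with a per-centre
    step size 1/count, so each centre tracks the running mean of its points."""
    centres, counts = [], []
    for p in points:
        if len(centres) < k:
            centres.append(list(p))
            counts.append(1)
            continue
        # assign to the nearest centre (squared Euclidean distance)
        i = min(range(k),
                key=lambda j: sum((a - c) ** 2 for a, c in zip(p, centres[j])))
        counts[i] += 1
        eta = 1.0 / counts[i]
        centres[i] = [c + eta * (a - c) for c, a in zip(centres[i], p)]
    return centres
```

Each point is seen exactly once and memory is O(k), the defining constraints of the data-stream setting described above.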

  10. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high-performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
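The accuracy trade-off can be reproduced in miniature: a schematic fixed-point complex multiply with a single rescale after the product, compared against a double-precision reference (illustrative bit widths only, not the authors' FPGA implementation):

```python
def to_fixed(x, frac_bits):
    """Quantize a real number to a signed integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def fixed_cmul(a, b, frac_bits):
    """Complex multiply of fixed-point pairs (re, im). The raw product carries
    2*frac_bits fractional bits, so shift once to rescale; the arithmetic
    right shift truncates, which is one source of the relative error studied."""
    ar, ai = a
    br, bi = b
    return ((ar * br - ai * bi) >> frac_bits,
            (ar * bi + ai * br) >> frac_bits)

def from_fixed(pair, frac_bits):
    return complex(pair[0], pair[1]) / (1 << frac_bits)
```

Sweeping `frac_bits` and comparing against the double-precision product is exactly the kind of error-versus-width study the abstract describes.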

  11. Computing Diameter in the Streaming and Sliding-Window Models (Preprint)

    National Research Council Canada - National Science Library

    Feigenbaum, Joan; Kannan, Sampath; Zhang, Jian

    2002-01-01

    We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of n points or a sliding window of size n, any exact algorithm for diameter requires Omega(n) bits of space...
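The contrast with approximation is worth a sketch: while the exact streaming diameter needs Omega(n) bits, a standard one-pass 2-approximation needs only O(1) storage (a textbook illustration, not taken from the paper):

```python
def stream_diameter_2approx(points):
    """One-pass 2-approximation for diameter: fix the first point p and track
    the farthest distance d from it. By the triangle inequality the true
    diameter lies in [d, 2d], using O(1) storage per stream."""
    it = iter(points)
    p = next(it)
    d = 0.0
    for q in it:
        d = max(d, sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5)
    return d
```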

  12. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    Directory of Open Access Journals (Sweden)

    H. Carter Edwards

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  13. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
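The two-model procedure can be outlined with ordinary least squares on log-transformed data (a noise-free synthetic sketch; the published guidelines add MSPE screening, bias correction, and retransformation steps not shown here):

```python
import numpy as np

def fit_ssc_models(turbidity, streamflow, ssc):
    """Fit the two candidate regressions on log-transformed data:
      simple  : log(SSC) ~ log(turbidity)
      multiple: log(SSC) ~ log(turbidity) + log(streamflow)
    Returns the coefficient vectors (intercept first)."""
    y = np.log(ssc)
    X1 = np.column_stack([np.ones(len(y)), np.log(turbidity)])
    X2 = np.column_stack([X1, np.log(streamflow)])
    b_simple, *_ = np.linalg.lstsq(X1, y, rcond=None)
    b_multi, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return b_simple, b_multi
```

In practice the multiple model would be retained only if the streamflow term is statistically significant and reduces model uncertainty, as the abstract describes.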

  14. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams.The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,

  15. Streaming from the Equator of a Drop in an External Electric Field.

    Science.gov (United States)

    Brosseau, Quentin; Vlahovska, Petia M

    2017-07-21

    Tip streaming generates micron- and submicron-sized droplets when a thin thread pulled from the pointy end of a drop disintegrates. Here, we report streaming from the equator of a drop placed in a uniform electric field. The instability generates concentric fluid rings encircling the drop, which break up to form an array of microdroplets in the equatorial plane. We show that the streaming results from an interfacial instability at the stagnation line of the electrohydrodynamic flow, which creates a sharp edge. The flow draws from the equator a thin sheet which destabilizes and sheds fluid cylinders. This streaming phenomenon provides a new route for generating monodisperse microemulsions.

  16. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent

  17. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  18. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  19. Explaining the "Pulse of Protoplasm": the search for molecular mechanisms of protoplasmic streaming.

    Science.gov (United States)

    Dietrich, Michael R

    2015-01-01

    Explanations for protoplasmic streaming began with appeals to contraction in the eighteenth century and ended with appeals to contraction in the twentieth. During the intervening years, biologists proposed a diverse array of mechanisms for streaming motions. This paper focuses on the re-emergence of contraction among the molecular mechanisms proposed for protoplasmic streaming during the twentieth century. The revival of contraction is a result of a broader transition from colloidal chemistry to a macromolecular approach to the chemistry of proteins, the recognition of the phenomena of shuttle streaming and the pulse of protoplasm, and the influential analogy between protoplasmic streaming and muscle contraction. © 2014 Institute of Botany, Chinese Academy of Sciences.

  20. Testing of focal plane arrays

    International Nuclear Information System (INIS)

    Merriam, J.D.

    1988-01-01

    Problems associated with the testing of focal plane arrays are briefly examined with reference to the instrumentation and measurement procedures. In particular, the approach and instrumentation used at the Naval Ocean Systems Center are presented. Most of the measurements are made with flooded illumination on the focal plane array. The array is treated as an ensemble of individual pixels, data being taken on each pixel and array averages and standard deviations computed for the entire array. Data maps are generated, showing the pixel data in the proper spatial position on the array and the array statistics.

  1. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to

  2. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
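The recurrence that such array processors parallelize is easy to state serially. Below is a reference sketch of fraction-free (Bareiss) elimination, assuming nonzero leading pivots (this serial form is standard textbook material, not the authors' array design):

```python
from fractions import Fraction

def bareiss_solve(A, b):
    """Solve an integer system A x = b by fraction-free Gaussian elimination:
    the update (M[k][k]*M[i][j] - M[i][k]*M[k][j]) / prev is an exact integer
    division, so no fractions appear until the final back substitution.
    Assumes nonzero leading pivots. Returns (det(A), x)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    prev = 1
    for k in range(n):
        for i in range(k + 1, n):
            for j in range(k + 1, n + 1):
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = Fraction(M[i][n])
        for j in range(i + 1, n):
            s -= M[i][j] * x[j]
        x[i] = s / M[i][i]
    return M[n - 1][n - 1], x
```

The inner two-term update is the per-cell operation a planar array assigns to each processing element; keeping all intermediates as integers is what makes the design hardware-friendly.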

  3. Checking for Circular Dependencies in Distributed Stream Programs

    Science.gov (United States)

    2011-08-29

    extensions to express new complexities more convenient. Teleport messaging (TMG) in the StreamIt language [30] is an example. 1.1 StreamIt Language...dynamicities to an FIR computation Thies et al. in [30] give a TMG model for distributed stream programs. TMG is a mechanism that implements control...messages for stream graphs. The TMG mechanism is designed not to interfere with original dataflow graphs' structures and scheduling, therefore a key

  4. Dense Array Optimization of Cross-Flow Turbines

    Science.gov (United States)

    Scherl, Isabel; Strom, Benjamin; Brunton, Steven; Polagye, Brian

    2017-11-01

    Cross-flow turbines, where the axis of rotation is perpendicular to the freestream flow, can be used to convert the kinetic energy in wind or water currents to electrical power. By taking advantage of mean and time-resolved wake structures, the optimal density of an array of cross-flow turbines has the potential for higher power output per unit area of land or sea-floor than an equivalent array of axial-flow turbines. In addition, dense arrays in tidal or river channels may be able to further elevate efficiency by exploiting flow confinement and surface proximity. In this work, a two-turbine array is optimized experimentally in a recirculating water channel. The spacing between turbines, as well as individual and coordinated turbine control strategies are optimized. Array efficiency is found to exceed the maximum efficiency for a sparse array (i.e., no interaction between turbines) for stream-wise rotor spacing of less than two diameters. Results are discussed in the context of wake measurements made behind a single rotor.

  5. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series.
We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
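As a rough illustration of the underlying idea, and assuming nothing about the authors' actual statistic, a malfunctioning channel can be flagged because it decorrelates from the otherwise highly correlated array-wide signal:

```python
import math

# Hypothetical QC sketch: channels on a healthy array record highly
# correlated signals, so a dead or noisy channel stands out as the one with
# low average correlation to the rest (a crude proxy for the change in
# subspace dimensionality described in the abstract).

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def flag_bad_channels(channels, threshold=0.5):
    """Return indices of channels poorly correlated with the other channels."""
    bad = []
    for i, ch in enumerate(channels):
        others = [c for j, c in enumerate(channels) if j != i]
        mean_r = sum(correlation(ch, o) for o in others) / len(others)
        if mean_r < threshold:
            bad.append(i)
    return bad

# three good channels see scaled copies of the same wavelet; channel 3 is bad
t = [i * 0.1 for i in range(100)]
signal = [math.sin(x) * math.exp(-0.1 * x) for x in t]
channels = [signal,
            [s * 1.1 for s in signal],
            [s * 0.9 for s in signal],
            [0.001 * ((-1) ** i) for i in range(100)]]  # dead, oscillating channel
```

A production statistic would work on the singular values of the channel matrix rather than pairwise correlations, but the flagging logic is the same.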

  6. Doublet III neutral beam multi-stream command language system

    International Nuclear Information System (INIS)

    Campbell, L.; Garcia, J.R.

    1983-12-01

    A multi-stream command language system was developed to provide control of the dual source neutral beam injectors on the Doublet III experiment at GA Technologies Inc. The Neutral Beam command language system consists of three parts: compiler, sequencer, and interactive task. The command language, which was derived from the Doublet III tokamak command language, POPS, is compiled, using a recursive descent compiler, into reverse polish notation instructions which then can be executed by the sequencer task. The interactive task accepts operator commands via a keyboard. The interactive task directs the operation of three input streams, creating commands which are then executed by the sequencer. The streams correspond to the two sources within a Doublet III neutral beam, plus an interactive stream. The sequencer multiplexes the execution of instructions from these three streams. The instructions include reads and writes to an operator terminal, arithmetic computations, intrinsic functions such as CAMAC input and output, and logical instructions. The neutral beam command language system was implemented using Modular Computer Systems (ModComp) Pascal and consists of two tasks running on a ModComp Classic IV computer
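The compile-to-RPN-then-sequence structure can be sketched with a toy evaluator. The opcode set below is illustrative only; it is not the POPS instruction set:

```python
# Toy model of the sequencer's execution step: the compiler emits reverse
# polish notation (RPN) instruction lists, and the sequencer executes them
# against a stack. Real instructions also covered terminal I/O and CAMAC.

def run_rpn(instructions):
    """Execute a list of RPN instructions: numbers push, operator strings pop."""
    stack = []
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    for ins in instructions:
        if isinstance(ins, (int, float)):
            stack.append(ins)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[ins](a, b))
    return stack.pop()

# "(3 + 4) * 2" compiles to the instruction stream below
result = run_rpn([3, 4, '+', 2, '*'])  # 14
```

Interleaving several such instruction lists, one per input stream, gives the multiplexing behavior the abstract describes.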

  7. The FPGA Pixel Array Detector

    International Nuclear Information System (INIS)

    Hromalik, Marianne S.; Green, Katherine S.; Philipp, Hugh T.; Tate, Mark W.; Gruner, Sol M.

    2013-01-01

    A proposed design for a reconfigurable x-ray Pixel Array Detector (PAD) is described. It operates by integrating a high-end commercial field programmable gate array (FPGA) into a 3-layer device along with a high-resistivity diode detection layer and a custom, application-specific integrated circuit (ASIC) layer. The ASIC layer contains an energy-discriminating photon-counting front end with photon hits streamed directly to the FPGA via a massively parallel, high-speed data connection. FPGA resources can be allocated to perform user-defined tasks on the pixel data streams, including the implementation of a direct time autocorrelation function (ACF) with time resolution down to 100 ns. Using the FPGA at the front end to calculate the ACF reduces the required data transfer rate by several orders of magnitude when compared to a fast framing detector. The FPGA-ASIC high-speed interface, as well as the in-FPGA implementation of a real-time ACF for x-ray photon correlation spectroscopy experiments, has been designed and simulated. A 16×16 pixel prototype of the ASIC has been fabricated and is being tested. -- Highlights: ► We describe the novelty and need for the FPGA Pixel Array Detector. ► We describe the specifications and design of the Diode, ASIC and FPGA layers. ► We highlight the Autocorrelation Function (ACF) for speckle as an example application. ► Simulated FPGA output calculates the ACF for different input bitstreams to 100 ns. ► Reduced the data transfer rate by 640× and sped up the real-time ACF by 100× compared to other methods.
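The direct time autocorrelation the FPGA computes can be modeled in software as follows. The bin values are made up for illustration; the real device evaluates this per pixel at 100 ns resolution:

```python
# Software model of the in-FPGA calculation: the direct (brute-force)
# unnormalized time autocorrelation of a photon-count stream,
#   g(tau) = sum_t counts[t] * counts[t + tau].

def autocorrelation(counts, max_lag):
    """Unnormalized direct ACF of a count stream for lags 0..max_lag."""
    n = len(counts)
    return [sum(counts[t] * counts[t + lag] for t in range(n - lag))
            for lag in range(max_lag + 1)]

acf = autocorrelation([1, 0, 2, 1, 0, 2], max_lag=3)
```

Computing this on-detector means only the few accumulated `acf` values, not the raw count stream, need to cross the readout link, which is the source of the quoted data-rate reduction.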

  8. The Magellanic Stream and debris clouds

    Energy Technology Data Exchange (ETDEWEB)

    For, B.-Q.; Staveley-Smith, L. [International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 (Australia); Matthews, D. [Centre for Materials and Surface Science, La Trobe University, Melbourne, VIC 3086 (Australia); McClure-Griffiths, N. M., E-mail: biqing.for@icrar.org [CSIRO Astronomy and Space Science, Epping, NSW 1710 (Australia)

    2014-09-01

    We present a study of the discrete clouds and filaments in the Magellanic Stream using a new high-resolution survey of neutral hydrogen (H I) conducted with the H75 array of the Australia Telescope Compact Array, complemented by single-dish data from the Parkes Galactic All-Sky Survey. From the individual and combined data sets, we have compiled a catalog of 251 clouds and listed their basic parameters, including a morphological description useful for identifying cloud interactions. We find an unexpectedly large number of head-tail clouds in the region. The implication for the formation mechanism and evolution is discussed. The filaments appear to originate entirely from the Small Magellanic Cloud and extend into the northern end of the Magellanic Bridge.

  9. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.

    Science.gov (United States)

    Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-10-21

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
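A one-dimensional toy version of the shift-and-add fusion step, under the assumption of known integer shifts (the real system registers 2D frames captured as the fiber taper rotates):

```python
# Shift-and-add fusion, 1D sketch: frames of the same scene, each offset by
# a known shift, are registered back onto a common grid and averaged.

def shift_and_add(frames, shifts):
    """Align each frame by its shift, then average the overlapping region."""
    n = min(len(f) - s for f, s in zip(frames, shifts))
    fused = [0.0] * n
    for frame, s in zip(frames, shifts):
        for i in range(n):
            fused[i] += frame[i + s]
    return [v / len(frames) for v in fused]

scene = [0, 0, 5, 9, 5, 0, 0, 0]
frames = [scene[0:6], scene[1:7], scene[2:8]]  # three successively shifted views
fused = shift_and_add(frames, shifts=[2, 1, 0])
```

In the actual platform the shifts are sub-pixel (produced by the 1-2 degree rotations), which is what lets the averaging recover resolution beyond a single frame rather than merely denoise.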

  10. StreamQRE: Modular Specification and Efficient Evaluation of Quantitative Queries over Streaming Data.

    Science.gov (United States)

    Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev

    2017-06-01

    Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.
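This is not StreamQRE's combinators themselves, but a hand-written sketch of the kind of query they compose: an incremental per-key aggregation with constant per-item processing cost and bounded memory per key:

```python
# Sketch of a streaming quantitative query: maintain a running average per
# key over an unbounded stream of (key, value) items. Each item is processed
# in O(1) time and the state per key is two numbers.

class KeyedAverage:
    def __init__(self):
        self.totals = {}  # key -> (running_sum, count)

    def process(self, key, value):
        """Consume one stream item incrementally."""
        s, c = self.totals.get(key, (0.0, 0))
        self.totals[key] = (s + value, c + 1)

    def query(self, key):
        """Current value of the aggregate for a key."""
        s, c = self.totals[key]
        return s / c

q = KeyedAverage()
for key, value in [('a', 2), ('b', 10), ('a', 4)]:
    q.process(key, value)
```

StreamQRE's contribution is letting such per-key partitioning be composed declaratively with regular-expression-style hierarchy over the stream, while the compiler derives exactly this kind of incremental state machine with provable per-item bounds.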

  11. Verification of computed tomographic estimates of cochlear implant array position: a micro-CT and histologic analysis.

    Science.gov (United States)

    Teymouri, Jessica; Hullar, Timothy E; Holden, Timothy A; Chole, Richard A

    2011-08-01

    To determine the efficacy of clinical computed tomographic (CT) imaging to verify postoperative electrode array placement in cochlear implant (CI) patients. Nine fresh cadaver heads underwent clinical CT scanning, followed by bilateral CI insertion and postoperative clinical CT scanning. Temporal bones were removed, trimmed, and scanned using micro-CT. Specimens were then dehydrated, embedded in either methyl methacrylate or LR White resin, and sectioned with a diamond wafering saw. Histology sections were examined by 3 blinded observers to determine the position of individual electrodes relative to soft tissue structures within the cochlea. Electrodes were judged to be within the scala tympani, scala vestibuli, or in an intermediate position between scalae. The position of the array could be estimated accurately from clinical CT scans in all specimens using micro-CT and histology as a criterion standard. Verification using micro-CT yielded 97% agreement, and histologic analysis revealed 95% agreement with clinical CT results. A composite, 3-dimensional image derived from a patient's preoperative and postoperative CT images using a clinical scanner accurately estimates the position of the electrode array as determined by micro-CT imaging and histologic analyses. Information obtained using the CT method provides valuable insight into numerous variables of interest to patient performance such as surgical technique, array design, and processor programming and troubleshooting.

  12. Scalar localization by cone-beam computed tomography of cochlear implant carriers: a comparative study between straight and perimodiolar precurved electrode arrays.

    Science.gov (United States)

    Boyer, Eric; Karkas, Alexandre; Attye, Arnaud; Lefournier, Virginie; Escude, Bernard; Schmerber, Sebastien

    2015-03-01

    To compare the incidence of dislocation of precurved versus straight flexible cochlear implant electrode arrays using cone-beam computed tomography (CBCT) image analyses. Consecutive nonrandomized case-comparison study. Tertiary referral center. Analyses of patients' CBCT images after cochlear implant surgery. Precurved and straight flexible electrode arrays from two different manufacturers were implanted. A round window insertion was performed in most cases. Two cases necessitated a cochleostomy. The patients' CBCT images were reconstructed in the coronal oblique, sagittal oblique, and axial oblique section. The insertion depth angle and the incidence of dislocation from the scala tympani to the scala vestibuli were determined. The CBCT images and the incidence of dislocation were analyzed in 54 patients (61 electrode arrays). Thirty-one patients were implanted with a precurved perimodiolar electrode array and 30 patients with a straight flexible electrode array. A total of nine (15%) scalar dislocations were observed in both groups. Eight (26%) scalar dislocations were observed in the precurved array group and one (3%) in the straight array group. Dislocation occurred at an insertion depth angle between 170 and 190 degrees in the precurved array group and at approximately 370 degrees in the straight array group. With precurved arrays, dislocation usually occurs in the ascending part of the basal turn of the cochlea. With straight flexible electrode arrays, the incidence of dislocation was lower, and it seems that straight flexible arrays have a higher chance of a confined position within the scala tympani than perimodiolar precurved arrays.

  13. Optimal array factor radiation pattern synthesis for linear antenna array using cat swarm optimization: validation by an electromagnetic simulator

    Institute of Scientific and Technical Information of China (English)

    Gopi RAM; Durbadal MANDAL; Sakti Prasad GHOSHAL; Rajib KAR

    2017-01-01

    In this paper, an optimal design of linear antenna arrays having microstrip patch antenna elements has been carried out. Cat swarm optimization (CSO) has been applied for the optimization of the control parameters of radiation pattern of an antenna array. The optimal radiation patterns of isotropic antenna elements are obtained by optimizing the current excitation weight of each element and the inter-element spacing. The antenna arrays of 12, 16, and 20 elements are taken as examples. The arrays are designed by using MATLAB computation and are validated through Computer Simulation Technology-Microwave Studio (CST-MWS). From the simulation results it is evident that CSO is able to yield the optimal design of linear antenna arrays of patch antenna elements.
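The quantity being optimized, the array factor of a linear array, can be evaluated directly. The sketch below assumes isotropic elements and uniform spacing given in wavelengths; it is not the authors' CSO code:

```python
import cmath
import math

# Array factor of an N-element linear array with per-element excitation
# weights and uniform spacing (in wavelengths). CSO would search over the
# weights and spacing to shape this pattern.

def array_factor_db(weights, spacing_wl, theta_deg):
    """|AF| in dB at angle theta, measured from the array axis."""
    k_d = 2 * math.pi * spacing_wl  # phase progression per element spacing
    psi = k_d * math.cos(math.radians(theta_deg))
    af = sum(w * cmath.exp(1j * n * psi) for n, w in enumerate(weights))
    return 20 * math.log10(abs(af))

# uniform 12-element, half-wavelength array: broadside peak at theta = 90 deg
weights = [1.0] * 12
peak = array_factor_db(weights, 0.5, 90.0)
```

An optimizer such as CSO evaluates this pattern at many angles per candidate and penalizes sidelobe levels, which is why closed-form evaluation speed matters.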

  14. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    Science.gov (United States)

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundant array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
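The difference co-array idea can be illustrated in a few lines. The sparse sensor positions below are made up and are not the paper's cascade geometry:

```python
# Difference co-array: sensor positions (in units of the base spacing)
# generate lags p - q; more unique lags means more degrees of freedom
# available to a spatial spectrum estimator.

def difference_coarray(positions):
    """All unique lags achievable by an array with the given positions."""
    return sorted({p - q for p in positions for q in positions})

ula = [0, 1, 2, 3]               # 4-sensor uniform linear array
sparse = [0, 1, 2, 3, 7, 11]     # a ULA joined to a sparse tail (made-up geometry)
dof_ula = len(difference_coarray(ula))       # 7 lags: -3 .. 3
dof_sparse = len(difference_coarray(sparse)) # 23 lags: -11 .. 11, no holes
```

The point of a designed geometry like the cascade array is to choose the non-uniform section so the lag set is as large and hole-free as possible for a fixed sensor count.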

  15. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via resistance changes under illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. Animal experiments (pig blood) demonstrate the feasibility of the approach.
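The abstract does not spell out the HAM construction, so as a hedged illustration, a classic outer-product heteroassociative memory mapping a bipolar sensor pattern to a stored alarm code looks like this:

```python
# Toy heteroassociative memory of the classic outer-product kind (the
# paper's HAM details are not given here): a weight matrix W = sum of
# y x^T over trained pairs maps input pattern x to output pattern y.

def train(pairs):
    """Build W from (input, output) pairs of bipolar (+1/-1) vectors."""
    n_in, n_out = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            for j in range(n_in):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Bipolar threshold recall: sign of W x, component by component."""
    return [1 if sum(w * xj for w, xj in zip(row, x)) >= 0 else -1 for row in W]

# two hypothetical photocell patterns mapped to alarm codes
pairs = [([1, -1, 1, -1], [1, -1]),   # "normal" pattern  -> no-alarm code
         ([-1, 1, -1, 1], [-1, 1])]   # "leak" pattern    -> alarm code
W = train(pairs)
```

A matrix-times-vector plus threshold is cheap enough to run on the embedded end-sensing unit, with the fog/cloud layer handling only the resulting risk codes.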

  16. Alignment data streams for the ATLAS inner detector

    CERN Document Server

    Pinto, B; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; García, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: the physics stream, express stream, calibration stream, and diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities that will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment cons...

  17. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  18. Cellular Subcompartments through Cytoplasmic Streaming.

    Science.gov (United States)

    Pieuchot, Laurent; Lai, Julian; Loh, Rachel Ann; Leong, Fong Yew; Chiam, Keng-Hwee; Stajich, Jason; Jedd, Gregory

    2015-08-24

    Cytoplasmic streaming occurs in diverse cell types, where it generally serves a transport function. Here, we examine streaming in multicellular fungal hyphae and identify an additional function wherein regimented streaming forms distinct cytoplasmic subcompartments. In the hypha, cytoplasm flows directionally from cell to cell through septal pores. Using live-cell imaging and computer simulations, we identify a flow pattern that produces vortices (eddies) on the upstream side of the septum. Nuclei can be immobilized in these microfluidic eddies, where they form multinucleate aggregates and accumulate foci of the HDA-2 histone deacetylase-associated factor, SPA-19. Pores experiencing flow degenerate in the absence of SPA-19, suggesting that eddy-trapped nuclei function to reinforce the septum. Together, our data show that eddies comprise a subcellular niche favoring nuclear differentiation and that subcompartments can be self-organized as a consequence of regimented cytoplasmic streaming.

  19. Computer program SCAP-BR for gamma-ray streaming through multi-legged ducts

    International Nuclear Information System (INIS)

    Byoun, T.Y.; Babel, P.J.; Dajani, A.T.

    1977-01-01

    A computer program, SCAP-BR, has been developed at Burns and Roe for the gamma-ray streaming analysis through multi-legged ducts. SCAP-BR is a modified version of the single scattering code, SCAP, incorporating capabilities of handling multiple scattering and volumetric source geometries. It utilizes the point kernel integration method to calculate both the line-of-sight and scattered gamma dose rates by employing the ray tracing technique through complex shield geometries. The multiple scattering is handled by a repeated process of the single scatter method through each successive scatter region and collapsed pseudo source meshes constructed on the relative coordinate systems. The SCAP-BR results have been compared with experimental data for a Z-type (three-legged) concrete duct with a Co-60 source placed at the duct entrance point. The SCAP-BR dose rate predictions along the duct axis demonstrate an excellent agreement with the measured values
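The point kernel integration SCAP-BR is built on can be sketched for the uncollided (line-of-sight) component. Buildup and multiple scattering, which SCAP-BR also handles, are omitted here, and the numbers are made up:

```python
import math

# Point kernel sketch: the uncollided flux from an isotropic point source
# falls off as 1/(4 pi r^2) and is attenuated exponentially along the ray
# through each shield layer it crosses (no buildup factor, no scattering).

def uncollided_flux(source_strength, distance, mu_t_pairs):
    """Flux at a distance through shield layers given (mu, thickness) pairs."""
    attenuation = math.exp(-sum(mu * t for mu, t in mu_t_pairs))
    return source_strength * attenuation / (4 * math.pi * distance ** 2)

# made-up numbers: 1e9 photons/s source, 200 cm away,
# through 10 cm of a material with mu = 0.2 per cm
flux = uncollided_flux(1e9, 200.0, [(0.2, 10.0)])
```

Duct streaming codes then add scattered contributions by treating scatter points in each duct leg as secondary sources and applying the same kernel leg by leg, which is the repeated single-scatter process the abstract describes.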

  20. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
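The work-farm pattern described above can be sketched with thread-safe queues standing in for the self-synchronizing channels. This is a software analogy, not the MPPA toolchain:

```python
import queue
import threading

# Work farm sketch: a parallel set of identical worker objects drains one
# input stream into one output stream. Queues play the role of the MPPA's
# self-synchronizing channels between processor objects.

def work_farm(items, worker_fn, num_workers=4):
    inputs, outputs = queue.Queue(), queue.Queue()
    for item in items:
        inputs.put(item)

    def worker():
        while True:
            try:
                item = inputs.get_nowait()
            except queue.Empty:
                return                      # input stream drained: worker exits
            outputs.put(worker_fn(item))    # emit result on the output stream

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [outputs.get() for _ in range(len(items))]

results = work_farm(range(8), lambda x: x * x)  # completion order may vary
```

On the MPPA each worker is a statically placed hardware object, so the farm scales by adding workers rather than by time-slicing, but the dataflow structure is the same.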

  1. Asymmetrical floating point array processors, their application to exploration and exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Geriepy, B L

    1983-01-01

    An asymmetrical floating point array processor is a special-purpose scientific computer which operates under asymmetrical control of a host computer. Although an array processor can receive fixed point input and produce fixed point output, its primary mode of operation is floating point. The first generation of array processors was oriented towards time series information. The next generation of array processors has proved much more versatile, and their applicability ranges from petroleum reservoir simulation to speech synthesis. Array processors are becoming commonplace in mining, the primary usage being construction of grids, by usual methods or by kriging. The Australian mining community is among the world's leaders in regard to computer-assisted exploration and exploitation systems. Part of this leadership role must be providing guidance to computer vendors in regard to current and future requirements.

  2. Plasma dynamics in aluminium wire array Z-pinch implosions

    International Nuclear Information System (INIS)

    Bland, S.N.

    2001-01-01

    The wire array Z-pinch is the world's most powerful laboratory X-ray source. An achieved power of ∼280TW has generated great interest in the use of these devices as a source of hohlraum heating for inertial confinement fusion experiments. However, the physics underlying how wire array Z-pinches implode is not well understood. This thesis presents the first detailed measurements of plasma dynamics in wire array experiments. The MAGPIE generator, with currents of up to 1.4MA, 150ns 10-90% rise-time, was used to implode arrays of 16mm diameter typically containing between 8 and 64 15μm aluminium wires. Diagnostics included: end and side-on laser probing with interferometry, schlieren and shadowgraphy channels; radial and axial streak photography; gated X-ray imaging; XUV and hard X-ray spectrometry; filtered XRDs and diamond PCDs; and a novel X-ray backlighting system to probe high density plasma. It was found that the plasma formed from the wires consisted of cold, dense cores, which ablated producing hot, low density coronal plasma. After an initial acceleration around the cores, coronal plasma streams flowed force-free towards the axis, with an instability wavelength determined by the core size. At ∼50% of the implosion time, the streams collided on axis forming a precursor plasma which appeared to be uniform, stable, and inertially confined. The existence of core-corona structure significantly affected implosion dynamics. For arrays with <64 wires, the wire cores remained in their original positions until ∼80% of the implosion time before accelerating rapidly. At 64 wires a transition in implosion trajectories to 0-D like occurred indicating a possible merger of current carrying plasma close to the cores - the cores themselves did not merge. During implosion, the cores initially developed uncorrelated instabilities that then transformed into a longer wavelength global mode of instability. The study of nested arrays (2 concentric arrays, one inside the other

  3. Doublet III neutral beam multi-stream command language system

    International Nuclear Information System (INIS)

    Campbell, L.; Garcia, J.R.

    1983-01-01

    A multi-stream command language system was developed to provide control of the dual source neutral beam injectors on the Doublet III experiment at GA Technologies Inc. The Neutral Beam command language system consists of three parts: compiler, sequencer, and interactive task. The command language, which was derived from the Doublet III tokamak command language, POPS, is compiled, using a recursive descent compiler, into reverse polish notation instructions which then can be executed by the sequencer task. The interactive task accepts operator commands via a keyboard. The interactive task directs the operation of three input streams, creating commands which are then executed by the sequencer. The streams correspond to the two sources within a Doublet III neutral beam, plus an interactive stream. The sequencer multiplexes the execution of instructions from these three streams. The instructions include reads and writes to an operator terminal, arithmetic computations, intrinsic functions such as CAMAC input and output, and logical instructions. The neutral beam command language system was implemented using Modular Computer Systems (ModComp) Pascal and consists of two tasks running on a ModComp Classic IV computer. The two tasks, the interactive and the sequencer, run independently and communicate using shared memory regions. The compiler runs as an overlay to the interactive task when so directed by operator commands. The system is successfully being used to operate the three neutral beams on Doublet III.

  4. Diversity of acoustic streaming in a rectangular acoustofluidic field.

    Science.gov (United States)

    Tang, Qiang; Hu, Junhui

    2015-04-01

    Diversity of the acoustic streaming field in a 2D rectangular chamber with a traveling wave, using water as the acoustic medium, is numerically investigated by the finite element method. It is found that the working frequency, the vibration excitation source length, and the distance and phase difference between two separated symmetric vibration excitation sources can cause the diversity in the acoustic streaming pattern. It is also found that a small object in the acoustic field results in an additional eddy, and affects the eddy size in the acoustic streaming field. In addition, the computation results show that with an increase of the acoustic medium's temperature, the speed of the main acoustic streaming decreases first and then increases, and the angular velocity of the corner eddies increases monotonously, which can be clearly explained by the change of the acoustic dissipation factor and shearing viscosity of the acoustic medium with temperature. The commercial FEM software COMSOL Multiphysics is used to implement the computation tasks, which makes our method very easy to use. The computation method is partially verified against an established analytical solution.

  5. A New Streamflow-Routing (SFR1) Package to Simulate Stream-Aquifer Interaction with MODFLOW-2000

    Science.gov (United States)

    Prudic, David E.; Konikow, Leonard F.; Banta, Edward R.

    2004-01-01

    The increasing concern for water and its quality requires improved methods to evaluate the interaction between streams and aquifers and the strong influence that streams can have on the flow and transport of contaminants through many aquifers. For this reason, a new Streamflow-Routing (SFR1) Package was written for use with the U.S. Geological Survey's MODFLOW-2000 ground-water flow model. The SFR1 Package is linked to the Lake (LAK3) Package, and both have been integrated with the Ground-Water Transport (GWT) Process of MODFLOW-2000 (MODFLOW-GWT). SFR1 replaces the previous Stream (STR1) Package; the most important difference is that stream depth is computed at the midpoint of each reach instead of at the beginning of each reach, as was done in the original Stream Package. This approach allows for the addition and subtraction of water from runoff, precipitation, and evapotranspiration within each reach. Because the SFR1 Package computes stream depth differently than the original package, a different name was used to distinguish it from the original Stream (STR1) Package. The SFR1 Package has five options for simulating stream depth and four options for computing diversions from a stream. The options for computing stream depth are: a specified value; Manning's equation (using a wide rectangular channel or an eight-point cross section); a power equation; or a table of values that relate flow to depth and width. Each stream segment can have a different option. Outflow from lakes can be computed using the same options. Because the wetted perimeter is computed for the eight-point cross section, and width is computed for the power equation and the table of values, the streambed conductance term no longer needs to be calculated externally whenever the area of the streambed changes as a function of flow. The concentration of solute is computed in a stream network when MODFLOW-GWT is used in conjunction with the SFR1 Package.
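
The wide-rectangular-channel option for Manning's equation mentioned above has a closed-form depth solution: with the hydraulic radius approximately equal to the depth d, Q = (1/n)·w·d^(5/3)·√S, so d = (n·Q / (w·√S))^(3/5). A minimal sketch (illustrative only, not SFR1 source code; SI units assumed):

```python
# Illustrative sketch (not SFR1 code): stream depth at a reach midpoint
# from Manning's equation for a wide rectangular channel, where the
# hydraulic radius is approximately the depth d:
#   Q = (1/n) * w * d**(5/3) * S**0.5  =>  d = (n*Q / (w * S**0.5))**(3/5)

def manning_depth(Q, n, w, S):
    """Q: flow (m^3/s), n: Manning roughness, w: width (m), S: slope (-)."""
    return (n * Q / (w * S ** 0.5)) ** 0.6

d = manning_depth(Q=10.0, n=0.035, w=20.0, S=0.001)   # depth in metres
```

Substituting d back into Manning's equation recovers the original flow, which is a quick consistency check on the exponent algebra.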

  6. Pollutant transport in natural streams

    International Nuclear Information System (INIS)

    Buckner, M.R.; Hayes, D.W.

    1975-01-01

    A mathematical model has been developed to estimate the downstream effect of chemical and radioactive pollutant releases to tributary streams and rivers. The one-dimensional dispersion model was employed along with a dead zone model to describe stream transport behavior. Options are provided for sorption/desorption, ion exchange, and particle deposition in the river. The model equations are solved numerically by the LODIPS computer code. The solution method was verified by application to actual and simulated releases of radionuclides and other chemical pollutants. (U.S.)
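
The one-dimensional dispersion model described above has the general form ∂C/∂t = D ∂²C/∂x² − u ∂C/∂x, plus the sorption, exchange, and deposition terms named in the abstract. As an illustration only — not the LODIPS code, which also includes the dead-zone model — an explicit upwind finite-difference step can be sketched as:

```python
# Explicit upwind finite-difference sketch of 1D advection-dispersion
# (illustrative only, not the LODIPS code):
#   dC/dt = D * d2C/dx2 - u * dC/dx

def step(C, u, D, dx, dt):
    """Advance the concentration profile C one time step."""
    new = C[:]
    for i in range(1, len(C) - 1):
        adv = -u * (C[i] - C[i - 1]) / dx                    # upwind advection
        dif = D * (C[i + 1] - 2 * C[i] + C[i - 1]) / dx**2   # dispersion
        new[i] = C[i] + dt * (adv + dif)
    return new

C = [0.0] * 50
C[10] = 1.0                        # unit pulse release at cell 10
for _ in range(60):                # stable: u*dt/dx = 0.25, D*dt/dx^2 = 0.025
    C = step(C, u=0.5, D=0.05, dx=1.0, dt=0.5)
# the pulse advects downstream (peak near cell 10 + u*t/dx = 25) and spreads
```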

  7. Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays.

    Science.gov (United States)

    Hiremath, Shivayogi V; Chen, Weidong; Wang, Wei; Foldes, Stephen; Yang, Ying; Tyler-Kabara, Elizabeth C; Collinger, Jennifer L; Boninger, Michael L

    2015-01-01

    A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  8. Continuous Distributed Top-k Monitoring over High-Speed Rail Data Stream in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2013-01-01

    Full Text Available In a cloud computing environment, real-time mass data about high-speed rail, generated by intensive monitoring with large-scale sensing equipment, provides strong support for the safety and maintenance of high-speed rail. In this paper, we focus on continuous distributed Top-k monitoring over multisource distributed data streams for high-speed rail. Specifically, we formalize a Top-k monitoring model for high-speed rail and propose DTMR, a Top-k monitoring algorithm that supports random, continuous, or strictly monotone aggregation functions. Extensive experiments show DTMR to be valid.
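
The coordinator side of distributed Top-k monitoring can be sketched very simply: each monitoring node reports only its local top-k items, and a central site merges the reports. This is a generic illustration of the pattern, not the DTMR algorithm itself (all names below are invented):

```python
import heapq

# Generic distributed top-k sketch (not DTMR): each node forwards its
# local top-k (id, score) pairs; the coordinator merges the reports.

def local_topk(readings, k):
    return heapq.nlargest(k, readings, key=lambda r: r[1])

def coordinator_topk(node_reports, k):
    merged = [r for report in node_reports for r in report]
    return heapq.nlargest(k, merged, key=lambda r: r[1])

node_a = [("s1", 0.9), ("s2", 0.4), ("s3", 0.7)]
node_b = [("s4", 0.8), ("s5", 0.95), ("s6", 0.1)]
reports = [local_topk(node_a, 2), local_topk(node_b, 2)]
top2 = coordinator_topk(reports, 2)    # [("s5", 0.95), ("s1", 0.9)]
```

Forwarding only the local top-k keeps communication proportional to k per node rather than to the full stream rate, which is the main cost saving in such schemes.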

  9. Research and implementation on improving I/O performance of streaming media storage system

    Science.gov (United States)

    Lu, Zheng-wu; Wang, Yu-de; Jiang, Guo-song

    2008-12-01

    In this paper, we study the special requirements of a particular storage system, the streaming media server, and propose a solution to improve the I/O performance of its RAID storage system. A streaming media storage subsystem includes the I/O interfaces, RAID arrays, I/O scheduling, and device drivers; the solution is implemented on top of the storage subsystem I/O interface. The storage subsystem is the performance bottleneck of a streaming media system, and the I/O interface directly affects the performance of the storage subsystem. According to theoretical analysis, a 64 KB block size is most appropriate for streaming media applications. Detailed experiments verified that the proper block size is indeed 64 KB, in accordance with our analysis. The experimental results also show that by using a DMA controller, efficient memory management, and a mailbox interface design, the streaming media storage system achieves high-speed data throughput.

  10. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  11. Streaming potential of superhydrophobic microchannels.

    Science.gov (United States)

    Park, Hung Mok; Kim, Damoa; Kim, Se Young

    2017-03-01

    For the purpose of gaining a larger streaming potential, it has been suggested to employ superhydrophobic microchannels with a large velocity slip. There are two kinds of superhydrophobic surfaces: one has a smooth wall with a large Navier slip coefficient caused by the hydrophobicity of the wall material, and the other has a periodic array of no-shear slots of air pockets embedded in a no-slip wall. The electrokinetic flows over these two superhydrophobic surfaces are modelled using the Navier-Stokes equation and convection-diffusion equations for the ionic species. The Navier slip coefficient of the first kind of surface and the no-shear slot ratio of the second kind are similar in the sense that the volumetric flow rate increases as these parameter values increase. However, although the streaming potential increases monotonically with the Navier slip coefficient, it reaches a maximum and then decreases as the no-shear ratio increases. These results imply that characterizing superhydrophobic surfaces solely by measuring volumetric flow rate against pressure drop is not adequate, and the fine structure of the superhydrophobic surfaces must be verified before the streaming potential and electrokinetic flows can be predicted accurately.

  12. Improved SNR of phased-array PERES coils via simulation study

    International Nuclear Information System (INIS)

    Rodríguez, Alfredo O; Medina, Lucía

    2005-01-01

    A computational comparison of signal-to-noise ratio (SNR) was performed between a conventional phased array of two circular-shaped coils and a petal resonator surface array. The quasi-static model and phased-array optimum SNR were combined to derive an SNR formula for each array. Analysis of mutual inductance between coil petals was carried out to compute the optimal coil separation and optimum number of petal coils. Mutual interaction between coil arrays was not included in the model because this does not drastically affect coil performance. Phased arrays of PERES coils show a 114% improvement in SNR over that of the simplest circular configuration. (note)

  13. A Mechanism for Cytoplasmic Streaming: Kinesin-Driven Alignment of Microtubules and Fast Fluid Flows.

    Science.gov (United States)

    Monteith, Corey E; Brunner, Matthew E; Djagaeva, Inna; Bielecki, Anthony M; Deutsch, Joshua M; Saxton, William M

    2016-05-10

    The transport of cytoplasmic components can be profoundly affected by hydrodynamics. Cytoplasmic streaming in Drosophila oocytes offers a striking example. Forces on fluid from kinesin-1 are initially directed by a disordered meshwork of microtubules, generating minor slow cytoplasmic flows. Subsequently, to mix incoming nurse cell cytoplasm with ooplasm, a subcortical layer of microtubules forms parallel arrays that support long-range, fast flows. To analyze the streaming mechanism, we combined observations of microtubule and organelle motions with detailed mathematical modeling. In the fast state, microtubules tethered to the cortex form a thin subcortical layer and undergo correlated sinusoidal bending. Organelles moving in flows along the arrays show velocities that are slow near the cortex and fast on the inward side of the subcortical microtubule layer. Starting with fundamental physical principles suggested by qualitative hypotheses, and with published values for microtubule stiffness, kinesin velocity, and cytoplasmic viscosity, we developed a quantitative coupled hydrodynamic model for streaming. The fully detailed mathematical model and its simulations identify key variables that can shift the system between disordered (slow) and ordered (fast) states. Measurements of array curvature, wave period, and the effects of diminished kinesin velocity on flow rates, as well as prior observations on f-actin perturbation, support the model. This establishes a concrete mechanistic framework for the ooplasmic streaming process. The self-organizing fast phase is a result of viscous drag on kinesin-driven cargoes that mediates equal and opposite forces on cytoplasmic fluid and on microtubules whose minus ends are tethered to the cortex. Fluid moves toward plus ends and microtubules are forced backward toward their minus ends, resulting in buckling. 
Under certain conditions, the buckling microtubules self-organize into parallel bending arrays, guiding varying directions

  14. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amount of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over the time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of the data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling where more/less resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide up-to-date model of the data stream. Comparing with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow on real time without extra overhead and provides insight analysis of the pick up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. 
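
The core KDE-Track idea — maintain PDF values at resampling points, update them online, and answer queries by interpolation — can be sketched as follows. This is a simplified illustration with a fixed grid and exponential forgetting; KDE-Track itself uses adaptive resampling:

```python
import math

# Simplified sketch of a KDE-Track-style online density estimator:
# keep PDF values at fixed resampling points, update them online with
# exponential forgetting, and answer queries by linear interpolation.
# (KDE-Track adapts the resampling grid; this sketch does not.)

class OnlineKDE:
    def __init__(self, grid, bandwidth=0.3, decay=0.99):
        self.grid = grid
        self.h = bandwidth
        self.decay = decay
        self.pdf = [0.0] * len(grid)   # unnormalized kernel sums
        self.n = 0.0                   # decayed sample count

    def update(self, x):
        self.n = self.decay * self.n + 1.0
        norm = self.h * math.sqrt(2 * math.pi)
        for i, g in enumerate(self.grid):
            k = math.exp(-0.5 * ((g - x) / self.h) ** 2) / norm
            self.pdf[i] = self.decay * self.pdf[i] + k

    def query(self, x):
        # linear interpolation between the two nearest resampling points
        g, n = self.grid, max(self.n, 1.0)
        if x <= g[0]:
            return self.pdf[0] / n
        if x >= g[-1]:
            return self.pdf[-1] / n
        for i in range(len(g) - 1):
            if g[i] <= x <= g[i + 1]:
                t = (x - g[i]) / (g[i + 1] - g[i])
                return ((1 - t) * self.pdf[i] + t * self.pdf[i + 1]) / n

est = OnlineKDE(grid=[i * 0.5 for i in range(-8, 9)])
for v in [0.0, 0.2, -0.1, 0.1, 0.05] * 20:   # stream concentrated near 0
    est.update(v)
```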

  15. High-channel-count, high-density microelectrode array for closed-loop investigation of neuronal networks.

    Science.gov (United States)

    Tsai, David; John, Esha; Chari, Tarun; Yuste, Rafael; Shepard, Kenneth

    2015-01-01

    We present a system for large-scale electrophysiological recording and stimulation of neural tissue with a planar topology. The recording system has 65,536 electrodes arranged in a 256 × 256 grid with 25.5 μm pitch, covering an area of approximately 42.6 mm². The recording chain has 8.66 μV rms input-referred noise over a 100 Hz to 10 kHz bandwidth while providing up to 66 dB of voltage gain. When recording from all electrodes in the array, it is capable of 10 kHz sampling per electrode. All electrodes can also perform patterned electrical microstimulation. The system produces ~1 GB/s of data when recording from the full array. To handle, store, and perform nearly real-time analyses of this large data stream, we developed a framework based around Xilinx FPGAs, Intel x86 CPUs, and NVIDIA streaming multiprocessors to interface with the electrode array.
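
The quoted ~1 GB/s figure is consistent with the stated array size and sampling rate if one assumes 2-byte samples per electrode (the sample width is an assumption, not stated in the abstract):

```python
# Back-of-envelope check of the ~1 GB/s data rate quoted above, assuming
# 16-bit (2-byte) samples per electrode -- the sample width is an
# assumption, not stated in the abstract.
electrodes = 256 * 256                                  # 65,536 electrodes
sample_rate_hz = 10_000                                 # 10 kHz per electrode
bytes_per_sample = 2
throughput_bytes = electrodes * sample_rate_hz * bytes_per_sample
gb_per_s = throughput_bytes / 1e9                       # ~1.31 GB/s
```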

  16. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications that can occur during dialysis therapy and are of concern to both healthcare givers and patients. More than 40% of an adult's blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via resistance changes under illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. Animal experiments (pig blood) demonstrate the feasibility. PMID:29515815
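
The classical outer-product (Hebbian) construction of a heteroassociative memory, the model family the HAM alarm unit belongs to, can be sketched on toy bipolar patterns. The sensor patterns and alarm codes below are invented for illustration, and the paper's exact HAM formulation may differ:

```python
# Minimal outer-product heteroassociative memory sketch (the classical
# Hebbian construction; the paper's exact HAM formulation may differ):
# store bipolar pattern pairs (x -> y), recall y from a possibly noisy x.

def train(pairs):
    nx, ny = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * nx for _ in range(ny)]
    for x, y in pairs:
        for i in range(ny):
            for j in range(nx):
                W[i][j] += y[i] * x[j]     # Hebbian outer product
    return W

def recall(W, x):
    return [1 if sum(w * xj for w, xj in zip(row, x)) >= 0 else -1
            for row in W]

# hypothetical bipolar sensor-array patterns: normal vs blood-leak signature
normal = [1, 1, 1, 1, -1, -1]
leak = [-1, -1, 1, 1, 1, 1]
W = train([(normal, [1, -1]), (leak, [-1, 1])])
alarm = recall(W, leak)                    # [-1, 1]: leak code recalled
```

The recall remains correct even when one sensor bit of the leak pattern is flipped, which is the noise tolerance that makes associative memories attractive for alarm units.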

  17. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, due to acoustic attenuation and the assumption of a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side-lobes. Moreover, directional objects that emit mainly parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can be effectively invisible and degrade resolution. In this study, we propose a full-view, array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
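
The coherence factor mentioned above is commonly defined as CF = |Σᵢ sᵢ|² / (N·Σᵢ |sᵢ|²) over the N delayed channel samples for a given image point: it equals 1 when the channels are perfectly coherent and falls toward 0 as they decorrelate. A minimal sketch of this standard definition:

```python
# Sketch of the coherence factor (CF) used as an adaptive beamforming
# weight: CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2), computed over the
# N delayed channel samples for one image point.  CF = 1 for perfectly
# coherent channels and drops toward 0 as the samples decorrelate.

def coherence_factor(samples):
    n = len(samples)
    coherent = abs(sum(samples)) ** 2
    incoherent = n * sum(abs(s) ** 2 for s in samples)
    return coherent / incoherent if incoherent else 0.0

cf_coherent = coherence_factor([1.0, 1.0, 1.0, 1.0])   # 1.0 (in phase)
cf_mixed = coherence_factor([1.0, -1.0, 1.0, -1.0])    # 0.0 (cancels)
```

Multiplying each beamformed pixel by its CF suppresses contributions where the delayed channel data disagree, which is how the weighting reduces phase-aberration artifacts.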

  18. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  19. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  20. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
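
The transform of the video stream into a 1D continuous frame-difference curve, followed by thresholding to find scene changes, can be sketched on per-frame feature vectors. This is illustrative only; the paper extracts its features from MPEG macroblock statistics rather than raw frames:

```python
# Sketch of temporal segmentation via a continuous frame-difference
# curve: reduce the video stream to one difference value per frame
# boundary, then flag scene changes where the curve spikes.
# (Illustrative; the paper derives its features from MPEG macroblock
# statistics rather than raw frames.)

def frame_difference_curve(frames):
    """frames: list of equal-length feature vectors, one per frame."""
    return [
        sum(abs(a - b) for a, b in zip(f0, f1))
        for f0, f1 in zip(frames, frames[1:])
    ]

def scene_changes(curve, threshold):
    """Indices of frames that start a new shot."""
    return [i + 1 for i, d in enumerate(curve) if d > threshold]

# two synthetic "shots" with a cut at frame 3
frames = [[10, 10], [11, 10], [10, 11], [90, 90], [91, 90]]
curve = frame_difference_curve(frames)     # [1, 2, 159, 1]
cuts = scene_changes(curve, threshold=50)  # [3]
```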

  1. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    Directory of Open Access Journals (Sweden)

    Junpeng Shi

    2017-02-01

    Full Text Available In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing-angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods under both white and colored noise.

  2. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    Inhomogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development of experimental and computational methods for quantifying this streaming radiation is continuing. The Monte Carlo based radiation transport code MCNP is the usual tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated will be analysed by simulating the experimental setup with the MCNP code, and optimized input parameters for the code will be formulated for solving similar radiation streaming problems. Comparison of experimental data obtained from radiation streaming experiments through ducts will give a set of thumb rules and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study demonstrates the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made; studies on spectral comparison of the streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, results from this kind of radiation streaming simulation and experiment will be very useful for shield structure optimization without compromising radiation safety.

  3. A Distributed Flocking Approach for Information Stream Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Intelligence analysts are currently overwhelmed with the volume of information streams generated every day, and there is a lack of comprehensive tools that can analyze these streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to produce accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our earlier research resulted in a dynamic reactive flock clustering algorithm that continually refines the clustering result and quickly reacts to changes in document content. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized nature of this algorithm, a distributed approach is a natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach to text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  4. Drug perfusion enhancement in tissue model by steady streaming induced by oscillating microbubbles.

    Science.gov (United States)

    Oh, Jin Sun; Kwon, Yong Seok; Lee, Kyung Ho; Jeong, Woowon; Chung, Sang Kug; Rhee, Kyehan

    2014-01-01

    Drug delivery into neurological tissue is challenging because of the low tissue permeability. Ultrasound incorporating microbubbles has been applied to enhance drug delivery into these tissues, but the effects of the streaming flow caused by microbubble oscillation on drug perfusion have not been elucidated. In order to clarify the physical effects of steady streaming on drug delivery, an experimental study of dye perfusion into a tissue model was performed using microbubbles excited by acoustic waves. The surface concentration and penetration length of the drug were increased by 12% and 13%, respectively, with streaming flow. The mass of dye perfused into a tissue phantom over 30 s was increased by about 20% in the phantom with oscillating bubbles. A computational model that considers fluid-structure interaction for streaming flow fields induced by oscillating bubbles was developed, and mass transfer of the drug into the porous tissue model was analyzed. The computed flow fields agreed with the theoretical solutions, and the dye concentration distribution in the tissue agreed well with the experimental data. The computational results showed that steady streaming with a streaming velocity of a few millimeters per second promotes mass transfer into a tissue.

  5. Radiation streaming in power reactors. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lahti, G.P.; Lee, R.R.; Courtney, J.C. (eds.)

    1979-02-01

    Separate abstracts are included for each of the 14 papers given at a special session on Radiation Streaming in Power Reactors held on November 15 at the American Nuclear Society 1978 Winter Meeting in Washington, D.C. The papers describe the methods of calculation, the engineering of shields, and the measurement of radiation environments within the containments of light water power reactors. Comparisons of measured and calculated data are used to determine the accuracy of computer predictions of the radiation environment. Specific computational and measurement techniques are described and evaluated. Emphasis is on radiation streaming in the annular region between the reactor vessel and the primary shield and its resultant environment within the primary containment.

  6. Computer programs for the acquisition and analysis of eddy-current array probe data

    International Nuclear Information System (INIS)

    Pate, J.R.; Dodd, C.V.

    1996-07-01

    The objective of the Improved Eddy-Current ISI (In-Service Inspection) for Steam Generator Tubing program is to upgrade and validate eddy-current inspections, including probes, instrumentation, and data processing techniques, for ISI of new, used, and repaired steam generator tubes; to improve defect detection, classification, and characterization as affected by diameter and thickness variations, denting, probe wobble, tube sheet, tube supports, and copper and sludge deposits, even when defect types and other variables occur in combination; and to transfer this advanced technology to NRC's mobile NDE laboratory and staff. This report documents computer programs that were developed for the acquisition of eddy-current data from specially designed 16-coil array probes. Complete code as well as instructions for use are provided.

  7. realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array

    Science.gov (United States)

    Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.

    2018-05-01

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.

  8. Delivering Instruction via Streaming Media: A Higher Education Perspective.

    Science.gov (United States)

    Mortensen, Mark; Schlieve, Paul; Young, Jon

    2000-01-01

    Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)

  9. Brain Computer Interface Learning for Systems Based on Electrocorticography and Intracortical Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Shivayogi V Hiremath

    2015-06-01

    Full Text Available A brain-computer interface (BCI system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  10. Computationally efficient optimisation algorithms for WECs arrays

    DEFF Research Database (Denmark)

    Ferri, Francesco

    2017-01-01

    In this paper two derivative-free global optimization algorithms are applied for the maximisation of the energy absorbed by wave energy converter (WEC) arrays. Wave energy is a large and mostly untapped source of energy that could have a key role in the future energy mix. The collection of this r...

  11. VFLOW2D - A Vortex-Based Code for Computing Flow Over Elastically Supported Tubes and Tube Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Walter P.; Strickland, James H.; Homicz, Gregory F.; Gossler, Albert A.

    2000-10-11

    A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
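    The core of such a vortex method is easy to sketch: each point vortex induces a velocity on every other via the 2D Biot-Savart law, and the vortices are advected in the resulting field. The following minimal Python sketch (forward-Euler advection, no diffusion, no fast multipole, none of VFLOW2D's actual machinery) illustrates the idea:

```python
import math

def induced_velocity(x, y, vortices, core=1e-9):
    """2D Biot-Savart velocity at (x, y) from point vortices [(xv, yv, gamma)]."""
    u = v = 0.0
    for xv, yv, gamma in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy
        if r2 < core:          # skip self-induction / regularize the vortex core
            continue
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v

def step(vortices, dt):
    """Advance all vortices one forward-Euler step in their mutually induced field."""
    vels = [induced_velocity(xv, yv, vortices) for xv, yv, _ in vortices]
    return [(xv + u * dt, yv + v * dt, g)
            for (xv, yv, g), (u, v) in zip(vortices, vels)]

# Standard sanity check: a counter-rotating pair of strength ±Γ separated by
# distance d translates at speed Γ/(2πd).
gamma, d = 1.0, 1.0
pair = [(0.0, 0.0, gamma), (d, 0.0, -gamma)]
u, v = induced_velocity(0.0, 0.0, pair)   # velocity of the left vortex
speed = math.hypot(u, v)
```

    In a production code like VFLOW2D, the O(N²) pairwise sum above is precisely what the fast multipole technique accelerates.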

  12. Detectability of weakly interacting massive particles in the Sagittarius dwarf tidal stream

    International Nuclear Information System (INIS)

    Freese, Katherine; Gondolo, Paolo; Newberg, Heidi Jo

    2005-01-01

    Tidal streams of the Sagittarius dwarf spheroidal galaxy (Sgr) may be showering dark matter onto the solar system and contributing ∼(0.3-23)% of the local density of our galactic halo. If the Sagittarius galaxy contains dark matter in the form of weakly interacting massive particles (WIMPs), the extra contribution from the stream gives rise to a steplike feature in the energy recoil spectrum in direct dark matter detection. For our best estimate of stream velocity (300 km/s) and direction (the plane containing the Sgr dwarf and its debris), the count rate is maximum on June 28 and minimum on December 27 (for most recoil energies), and the location of the step oscillates yearly with a phase opposite to that of the count rate. In the CDMS experiment, for 60 GeV WIMPs, the location of the step oscillates between 35 and 42 keV, and for the most favorable stream density, the stream should be detectable at the 11σ level in four years of data with 10 keV energy bins. Planned large detectors like XENON, CryoArray, and the directional detector DRIFT may also be able to identify the Sgr stream.

  13. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It retains the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is of order 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  14. Toward Design Guidelines for Stream Restoration Structures: Measuring and Modeling Unsteady Turbulent Flows in Natural Streams with Complex Hydraulic Structures

    Science.gov (United States)

    Lightbody, A.; Sotiropoulos, F.; Kang, S.; Diplas, P.

    2009-12-01

    Despite their widespread application to prevent lateral river migration, stabilize banks, and promote aquatic habitat, shallow transverse flow training structures such as rock vanes and stream barbs lack quantitative design guidelines. Due to the lack of fundamental knowledge about the interaction of the flow field with the sediment bed, existing engineering standards are typically based on various subjective criteria or on cross-sectionally-averaged shear stresses rather than local values. Here, we examine the performance and stability of in-stream structures within a field-scale single-threaded sand-bed meandering stream channel in the newly developed Outdoor StreamLab (OSL) at the St. Anthony Falls Laboratory (SAFL). Before and after the installation of a rock vane along the outer bank of the middle meander bend, high-resolution topography data were obtained for the entire 50-m-long reach at 1-cm spatial scale in the horizontal and sub-millimeter spatial scale in the vertical. In addition, detailed measurements of flow and turbulence were obtained using acoustic Doppler velocimetry at twelve cross-sections focused on the vicinity of the structure. Measurements were repeated over a range of flow events, including in-bank flows with an approximate flow rate of 44 L/s (1.4 cfs) and bankfull floods with an approximate flow rate of 280 L/s (10 cfs). Under both flow rates, the structure reduced near-bank shear stresses and resulted in both a deeper thalweg and near-bank aggradation. The resulting comprehensive dataset has been used to validate a large eddy simulation carried out by SAFL’s computational fluid dynamics model, the Virtual StreamLab (VSL). This versatile computational framework is able to efficiently simulate 3D unsteady turbulent flows in natural streams with complex in-stream structures and as a result holds promise for the development of much-needed quantitative design guidelines.

  15. ISS Solar Array Management

    Science.gov (United States)

    Williams, James P.; Martin, Keith D.; Thomas, Justin R.; Caro, Samuel

    2010-01-01

    The International Space Station (ISS) Solar Array Management (SAM) software toolset provides the capabilities necessary to operate a spacecraft with complex solar array constraints. It monitors spacecraft telemetry and provides interpretations of solar array constraint data in an intuitive manner. The toolset provides extensive situational awareness to ensure mission success by analyzing power generation needs, array motion constraints, and structural loading situations. The software suite consists of several components including samCS (constraint set selector), samShadyTimers (array shadowing timers), samWin (visualization GUI), samLock (array motion constraint computation), and samJet (attitude control system configuration selector). It provides high availability and uptime for extended and continuous mission support. It is able to support two-degrees-of-freedom (DOF) array positioning and supports up to ten simultaneous constraints with intuitive 1D and 2D decision support visualizations of constraint data. Display synchronization is enabled across a networked control center and multiple methods for constraint data interpolation are supported. Use of this software toolset increases flight safety, reduces mission support effort, optimizes solar array operation for achieving mission goals, and has run for weeks at a time without issues. The SAM toolset is currently used in ISS real-time mission operations.

  16. A Design of Experiments Investigation of Offset Streams for Supersonic Jet Noise Reduction

    Science.gov (United States)

    Henderson, Brenda; Papamoschou, Dimitri

    2014-01-01

    An experimental investigation into the noise characteristics of a dual-stream jet with four airfoils inserted in the fan nozzle was conducted. The intent of the airfoils was to deflect the fan stream relative to the core stream and, therefore, impact the development of the secondary potential core and the noise radiated in the peak jet-noise direction. The experiments used a full-factorial Design of Experiments (DoE) approach to identify parameters and parameter interactions impacting noise radiation at two azimuthal microphone array locations, one of which represented a sideline viewing angle. The parameters studied included airfoil angle-of-attack, airfoil azimuthal location within the fan nozzle, and airfoil axial location relative to the fan-nozzle trailing edge. Jet conditions included subsonic and supersonic fan-stream Mach numbers. Heated jet conditions were simulated with a mixture of helium and air to replicate the exhaust velocity and density of the hot jets. The introduction of the airfoils was shown to impact noise radiated at polar angles in the peak jet-noise direction, and to have no impact either on noise radiated at small and broadside polar angles or on broadband-shock-associated noise. The DoE analysis showed that the main effects impacting noise radiation at sideline azimuthal viewing angles included the azimuthal angle of the airfoils on the lower side of the jet near the sideline array and the airfoil trailing-edge distance (airfoils located at the nozzle trailing edge produced the lowest sound pressure levels). For an array located directly beneath the jet (on the side of the jet from which the fan stream was deflected), the main effects included airfoil angle-of-attack and the azimuthal angle of the airfoils on the observation side of the jet, as well as trailing-edge distance. Interaction terms between multiple configuration parameters were shown to have significant impact on the radiated

  17. A Lightweight Protocol for Secure Video Streaming.

    Science.gov (United States)

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point-to-point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
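    The core idea of embedding authentication data into a datagram stream with HMAC can be sketched in a few lines. This is an illustrative construction, not the wire format of the proposed protocol; the sequence-number header, tag length, and pre-shared key handling below are assumptions:

```python
import hmac
import hashlib
import struct

KEY = b"shared-secret-between-fog-node-and-device"  # hypothetical pre-shared key

def seal(seq, payload, key=KEY):
    """Build an authenticated datagram: sequence-number header + payload + HMAC tag."""
    header = struct.pack("!Q", seq)               # 8-byte big-endian sequence number
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def open_(datagram, key=KEY):
    """Verify the tag; return (seq, payload), or None if authentication fails."""
    if len(datagram) < 8 + 32:
        return None
    header, payload, tag = datagram[:8], datagram[8:-32], datagram[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):    # constant-time comparison
        return None
    return struct.unpack("!Q", header)[0], payload
```

    A real deployment along the lines of the paper would additionally encrypt the payload with a symmetric cipher and use the sequence number for replay protection.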

  18. FACT. Streamed data analysis and online application of machine learning models

    Energy Technology Data Exchange (ETDEWEB)

    Bruegge, Kai Arno; Buss, Jens [Technische Universitaet Dortmund (Germany). Astroteilchenphysik; Collaboration: FACT-Collaboration

    2016-07-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) like FACT produce a continuous flow of data during measurements. Analyzing the data in near real time is essential for monitoring sources. One major task of a monitoring system is to detect changes in the gamma-ray flux of a source, and to alert other experiments if some predefined limit is reached. In order to calculate the flux of an observed source, it is necessary to run an entire data analysis process including calibration, image cleaning, parameterization, signal-background separation and flux estimation. Software built on top of a data streaming framework has been implemented for FACT and generalized to work with the data acquisition framework of the Cherenkov Telescope Array (CTA). We present how the streams-framework is used to apply supervised machine learning models to an online data stream from the telescope.

  19. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    Science.gov (United States)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
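    For context, the baseline the paper improves on is the dense O(n³) ordinary Kriging solve. A minimal pure-Python sketch of that baseline follows; the Gaussian covariance model and its range parameter are assumptions for illustration, not the paper's configuration:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting: the O(n^3) cost the paper targets."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ordinary_kriging(points, values, target,
                     cov=lambda h: math.exp(-(h / 10.0) ** 2)):
    """Estimate the field at `target` from (x, y) `points` with observed `values`."""
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Augmented system enforcing sum(weights) == 1 via a Lagrange multiplier.
    A = [[cov(dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(dist(p, target)) for p in points] + [1.0]
    w = solve(A, b)[:n]                            # Kriging weights
    return sum(wi * zi for wi, zi in zip(w, values))
```

    The incremental and recursive strategies in the paper avoid repeating this full solve when most source locations persist from one iteration to the next.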

  20. A stream cipher based on a spatiotemporal chaotic system

    International Nuclear Information System (INIS)

    Li Ping; Li Zhong; Halang, Wolfgang A.; Chen Guanrong

    2007-01-01

    A stream cipher based on a spatiotemporal chaotic system is proposed. A one-way coupled map lattice consisting of logistic maps serves as the spatiotemporal chaotic system. Multiple keystreams are generated from the coupled map lattice using simple algebraic computations, and are then used to encrypt the plaintext via bitwise XOR. This makes the cipher simple and efficient. Numerical investigation shows that the cryptographic properties of the generated keystream are satisfactory. The cipher appears to offer higher security, higher efficiency, and lower computational expense than a recently proposed stream cipher based on a spatiotemporal chaotic system.
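    The building blocks described (a one-way coupled lattice of logistic maps, keystream extraction, bitwise XOR) can be sketched as follows. This is a toy illustration of the construction, not the published cipher, and it must not be used for real security; the lattice size, coupling strength, warm-up length, and key schedule are all assumptions:

```python
def keystream(key, n_bytes, size=8, eps=0.95, warmup=100):
    """One-way coupled logistic-map lattice keystream (illustrative, NOT secure).

    Lattice update: x_{t+1}(i) = (1 - eps)*f(x_t(i)) + eps*f(x_t(i-1)),
    with the fully chaotic logistic map f(x) = 4x(1 - x) and ring coupling.
    """
    f = lambda x: 4.0 * x * (1.0 - x)
    # Seed lattice sites from the key bytes (hypothetical key schedule).
    x = [(key[i % len(key)] + 1) / 257.0 for i in range(size)]
    out = bytearray()
    for t in range(warmup + n_bytes):
        x = [(1 - eps) * f(x[i]) + eps * f(x[i - 1]) for i in range(size)]
        if t >= warmup:                      # discard transient, then emit bytes
            out.append(int(x[-1] * 256) & 0xFF)
    return bytes(out)

def xor_cipher(key, data):
    """Encrypt/decrypt by XOR with the keystream (the two operations coincide)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

    Because XOR is its own inverse, applying `xor_cipher` twice with the same key recovers the plaintext.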

  1. An efficient method for evaluating RRAM crossbar array performance

    Science.gov (United States)

    Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping

    2016-06-01

    An efficient method is proposed in this paper to mitigate the computational burden of resistive random access memory (RRAM) array simulation. In the worst-case scenario, a 4 Mb RRAM array with line resistance is reduced to a much smaller equivalent array using this method. For 1S1R-RRAM array structures, static and statistical parameters in both reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even if the size of the RRAM array is reduced by one thousand times, which indicates significant improvements in both computational efficiency and memory requirements.

  2. Streams with Strahler Stream Order

    Data.gov (United States)

    Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...
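    Strahler ordering itself follows a simple upstream rule: headwater segments have order 1, and a segment's order increases by one only where two tributaries of equal highest order meet. A minimal sketch (the child-map representation of the stream network is an assumption, not the DNR data model):

```python
def strahler(children, node):
    """Strahler order of `node` given a map from each segment to its upstream segments."""
    kids = children.get(node, [])
    if not kids:                     # headwater segment
        return 1
    orders = sorted((strahler(children, k) for k in kids), reverse=True)
    # Order increases only when the two highest upstream orders are equal.
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]
```

    For example, two order-1 headwaters joining give order 2, and two order-2 branches joining give order 3, while an order-2 branch joined by an order-1 tributary stays at order 2.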

  3. Development of a cross-section based stream package for MODFLOW

    Science.gov (United States)

    Ou, G.; Chen, X.; Irmak, A.

    2012-12-01

    Accurate simulation of stream-aquifer interactions for wide rivers using the streamflow routing package in MODFLOW is very challenging. To better represent a wide river spanning multiple model grid cells, a Cross-Section based streamflow Routing (CSR) package is developed and incorporated into MODFLOW to simulate the interaction between streams and aquifers. In the CSR package, a stream segment is represented as a four-point polygon instead of the polyline traditionally used in streamflow routing simulation. Each stream segment is bounded by upstream and downstream cross-sections. A cross-section consists of a number of streambed points possessing coordinates, streambed thicknesses, and streambed hydraulic conductivities to describe the streambed geometry and hydraulic properties. The left and right end points are used to determine the locations of the stream segments. According to the cross-section geometry and hydraulic properties, CSR calculates the new stream stage at the cross-section using Brent's method to solve Manning's equation. A module is developed to automatically compute the area of the stream segment polygon on each intersected MODFLOW grid cell as the upstream and downstream stages change. The stream stage and streambed hydraulic properties of model grids are interpolated from the streambed points. Streambed leakage is computed as a function of streambed conductance and the difference between the groundwater level and the stream stage. The Muskingum-Cunge flow routing scheme with variable parameters is used to simulate the streamflow as the groundwater (discharge or recharge) contributes as lateral flow. An example is used to illustrate the capabilities of the CSR package. The result shows that CSR is capable of describing the spatial and temporal variation in the interaction between streams and aquifers. Input data preparation is simplified because the internal program automatically interpolates the cross-section data to each
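    The stage computation described (root-finding on Manning's equation) can be sketched as follows for a rectangular channel. CSR uses Brent's method and general cross-section geometry; this illustration uses plain bisection and assumed channel parameters to stay dependency-free:

```python
def manning_q(y, b=10.0, n=0.03, s=0.001):
    """Discharge (SI units) through a rectangular channel at stage y (Manning's equation).

    Q = (1/n) * A * R^(2/3) * sqrt(S), with assumed width b, roughness n, slope s.
    """
    a = b * y                      # flow area
    r = a / (b + 2.0 * y)          # hydraulic radius = area / wetted perimeter
    return (1.0 / n) * a * r ** (2.0 / 3.0) * s ** 0.5

def solve_stage(q_target, lo=1e-6, hi=50.0, tol=1e-10):
    """Find the stage y with manning_q(y) == q_target by bisection.

    Valid because discharge grows monotonically with stage in this channel.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if manning_q(mid) < q_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

    Brent's method, as used in CSR, converges faster than bisection on the same bracketing interval but follows the same idea: bracket the stage, then shrink the bracket until the residual of Manning's equation vanishes.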

  4. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  5. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation in low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder with 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  6. Flow Field and Acoustic Predictions for Three-Stream Jets

    Science.gov (United States)

    Simmons, Shaun Patrick; Henderson, Brenda S.; Khavaran, Abbas

    2014-01-01

    Computational fluid dynamics was used to analyze a three-stream nozzle parametric design space. The study varied bypass-to-core area ratio, tertiary-to-core area ratio and jet operating conditions. The flowfield solutions from the Reynolds-Averaged Navier-Stokes (RANS) code Overflow 2.2e were used to pre-screen experimental models for a future test in the Aero-Acoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center (GRC). Flowfield solutions were considered in conjunction with the jet-noise-prediction code JeNo to screen the design concepts. A two-stream versus three-stream computation based on equal mass flow rates showed a reduction in peak turbulent kinetic energy (TKE) for the three-stream jet relative to that for the two-stream jet which resulted in reduced acoustic emission. Additional three-stream solutions were analyzed for salient flowfield features expected to impact farfield noise. As tertiary power settings were increased there was a corresponding near nozzle increase in shear rate that resulted in an increase in high frequency noise and a reduction in peak TKE. As tertiary-to-core area ratio was increased the tertiary potential core elongated and the peak TKE was reduced. The most noticeable change occurred as secondary-to-core area ratio was increased thickening the secondary potential core, elongating the primary potential core and reducing peak TKE. As forward flight Mach number was increased the jet plume region decreased and reduced peak TKE.

  7. Characterization of the electrical output of flat-plate photovoltaic arrays

    Science.gov (United States)

    Gonzalez, C. C.; Hill, G. M.; Ross, R. G., Jr.

    1982-01-01

    The electrical output of flat-plate photovoltaic arrays changes constantly, due primarily to changes in cell temperature and irradiance level. As a result, array loads such as direct-current to alternating-current power conditioners must be able to accommodate widely varying input levels while maintaining operation at or near the array maximum power point. The results of an extensive computer simulation study that was used to define the parameters necessary for the systematic design of array/power-conditioner interfaces are presented as normalized ratios of power-conditioner parameters to array parameters, making the results universally applicable to a wide variety of system sizes, sites, and operating modes. The advantages of maximum power tracking and a technique for computing average annual power-conditioner efficiency are discussed.

  8. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We present the benefits of the CUDA programming model. We also compare the two main approaches, CUDA and AMD APP (formerly Stream), and the new framework, OpenCL, which tries to unify the GPGPU computing models.

  9. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  10. Advanced computational multi-fluid dynamics: a new model for understanding electrokinetic phenomena in porous media

    Science.gov (United States)

    Gulamali, M. Y.; Saunders, J. H.; Jackson, M. D.; Pain, C. C.

    2009-04-01

    We present results from a new computational multi-fluid dynamics code, designed to model the transport of heat, mass and chemical species during flow of single or multiple immiscible fluid phases through porous media, including gravitational effects and compressibility. The model also captures the electrical phenomena which may arise through electrokinetic, electrochemical and electrothermal coupling. Building on the advanced computational technology of the Imperial College Ocean Model, this new development leads the way towards a complex multiphase code using arbitrary unstructured and adaptive meshes, and domains decomposed to run in parallel over a cluster of workstations or a dedicated parallel computer. These facilities will allow efficient and accurate modelling of multiphase flows which capture large- and small-scale transport phenomena, while preserving the important geology and/or surface topology to make the results physically meaningful and realistic. Applications include modelling of contaminant transport in aquifers, multiphase flow during hydrocarbon production, migration of carbon dioxide during sequestration, and evaluation of the design and safety of nuclear reactors. Simulations of the streaming potential resulting from multiphase flow in laboratory- and field-scale models demonstrate that streaming potential signals originate at fluid fronts, and at geologic boundaries where fluid saturation changes. This suggests that downhole measurements of streaming potential may be used to inform production strategies in oil and gas reservoirs. As water encroaches on an oil production well, the streaming-potential signal associated with the water front encompasses the well even when the front is up to 100 m away, so the potential measured at the well starts to change significantly relative to a distant reference electrode. Variations in the geometry of the encroaching water front could be characterized using an array of electrodes positioned along the well.

  11. METALLICITY AND AGE OF THE STELLAR STREAM AROUND THE DISK GALAXY NGC 5907

    Energy Technology Data Exchange (ETDEWEB)

    Laine, Seppo; Grillmair, Carl J.; Capak, Peter [Spitzer Science Center-Caltech, MS 314-6, Pasadena, CA 91125 (United States); Arendt, Richard G. [CRESST/UMBC/NASA GSFC, Code 665, Greenbelt, MD 20771 (United States); Romanowsky, Aaron J. [Department of Physics and Astronomy, San José State University, One Washington Square, San Jose, CA 95192 (United States); Martínez-Delgado, David [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Ashby, Matthew L. N. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Davies, James E. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Majewski, Stephen R. [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904-4325 (United States); Brodie, Jean P.; Arnold, Jacob A. [University of California Observatories and Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); GaBany, R. Jay, E-mail: seppo@ipac.caltech.edu [Black Bird Observatory, 5660 Brionne Drive, San Jose, CA 95118 (United States)

    2016-09-01

    Stellar streams have become central to studies of the interaction histories of nearby galaxies. To characterize the most prominent parts of the stellar stream around the well-known nearby (d = 17 Mpc) edge-on disk galaxy NGC 5907, we have obtained and analyzed new, deep gri Subaru/Suprime-Cam and 3.6 μm Spitzer/Infrared Array Camera observations. Combining the near-infrared 3.6 μm data with visible-light images allows us to use a long wavelength baseline to estimate the metallicity and age of the stellar population along an ∼60 kpc long segment of the stream. We have fitted the stellar spectral energy distribution with a single-burst stellar population synthesis model and we use it to distinguish between the proposed satellite accretion and minor/major merger formation models of the stellar stream around this galaxy. We conclude that a massive minor merger (stellar mass ratio of at least 1:8) can best account for the metallicity of −0.3 inferred along the brightest parts of the stream.

  12. Hydroelectric plant turbine, stream and spillway flow measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lampa, J.; Lemon, D.; Buermans, J. [ASL AQ Flow Inc., Sidney, BC (Canada)

    2004-07-01

    This presentation provided schematics of the turbine flow measurements and typical bulb installations at the Kootenay Canal and Wells hydroelectric power facilities in British Columbia. A typical arrangement for measuring stream flow using acoustic scintillation was also illustrated. Acoustic scintillation is portable, non-intrusive, suitable for short intakes, requires minimal maintenance and is cost effective and accurate. A comparison between current meters and acoustic scintillation was also presented. Stream flow measurement is valuable in evaluating downstream areas that are environmentally important for fish habitat. Stream flow measurement makes it possible to define circulation. The effects of any changes can be assessed by combining field measurements and numerical modelling. The presentation also demonstrated that computational fluid dynamics modelling appears promising in determining stream flow and turbulent flow at spillways. tabs., figs.

  13. Microprocessor system to recover data from a self-scanning photodiode array

    International Nuclear Information System (INIS)

    Koppel, L.N.; Gadd, T.J.

    1975-01-01

    A microprocessor system developed at Lawrence Livermore Laboratory has expedited the recovery of data describing the low-energy x-ray spectra radiated by laser-fusion targets. An Intel microprocessor controls the digitization and scanning of the data stream of an x-ray-sensitive self-scanning photodiode array incorporated in a crystal diffraction spectrometer.

  14. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Around 1993 was the early period in which parallel computers began to be used for geophysical exploration. At that time these computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were publicized in the 1994 meeting of the Geophysical Exploration Society as a 'high precision imaging technology'. Concerning libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, the FORTRAN90 compiler was released with support for data-parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products based on ATM began to propagate. However, ATM remains an interoffice high-speed network because the ATM service is not yet widespread on the public network. 1 ref.

  15. Cytosolic streaming in vegetative mycelium and aerial structures of Aspergillus niger.

    Science.gov (United States)

    Bleichrodt, R; Vinck, A; Krijgsheld, P; van Leeuwen, M R; Dijksterhuis, J; Wösten, H A B

    2013-03-15

    Aspergillus niger forms aerial hyphae and conidiophores after a period of vegetative growth. The hyphae within the mycelium of A. niger are divided by septa. The central pore in these septa allows for cytoplasmic streaming. Here, we studied inter- and intra-compartmental streaming of the reporter protein GFP in A. niger. Expression of the gene encoding nuclear-targeted GFP from the gpdA or glaA promoter resulted in strong fluorescence of nuclei within the vegetative hyphae and weak fluorescence in nuclei within the aerial structures. These data and nuclear run-on experiments showed that gpdA and glaA are expressed at higher levels in the vegetative mycelium than in aerial hyphae, conidiophores and conidia. Notably, gpdA- or glaA-driven expression of the gene encoding cytosolic GFP resulted in strongly fluorescent vegetative hyphae and aerial structures. Apparently, GFP streams from vegetative hyphae into aerial structures. This was confirmed by monitoring fluorescence of photo-activatable GFP (PA-GFP). In contrast, PA-GFP did not stream from aerial structures to vegetative hyphae. Streaming of PA-GFP within vegetative hyphae or within aerial structures of A. niger occurred at a rate of 10–15 μm s⁻¹. Taken together, these results not only show that GFP streams from the vegetative mycelium to aerial structures but also indicate that its encoding RNA does not stream. Absence of RNA streaming would explain why distinct RNA profiles were found in aerial structures and the vegetative mycelium by nuclear run-on analysis and micro-array analysis.

  16. The Integration of Environmental Constraints into Tidal Array Optimisation

    Science.gov (United States)

    du Feu, Roan; de Trafford, Sebastian; Culley, Dave; Hill, Jon; Funke, Simon W.; Kramer, Stephan C.; Piggott, Matthew D.

    2015-04-01

    The Carbon Trust has estimated that the marine renewable energy sector, of which tidal stream turbines are projected to form a large part, could produce 20% of the UK's present electricity requirements. This has led to the important question of how this technology can be deployed in an economically and environmentally sound manner. Work is currently under way to understand how the tidal turbines that constitute an array can be arranged to maximise the total power generated by that array. The work presented here continues this through the inclusion of environmental constraints. The benefits of the renewable energy sector to our environment at large are not in question. However, the question remains as to the effects this burgeoning sector will have on local environments, and how to mitigate these effects if they are detrimental. For example, the presence of tidal arrays can, by altering current velocity, drastically change the sediment transport into and out of an area, as well as re-suspend existing sediment. This can scour or submerge habitat, mobilise contaminants within the existing sediment, reduce food supply and alter the turbidity of the water, all of which greatly impact any fauna in the affected region. This work pays particular attention to the destruction of habitat of benthic fauna, as this is quantifiable as a direct result of change in the current speed, a primary factor in determining sediment accumulation on the sea floor. OpenTidalFarm is an open-source tool that maximises the power generated by an array through repositioning the turbines within it. It currently uses a 2D shallow-water model with turbines represented as bump functions of increased friction. The functional of interest, power extracted by the array, is evaluated from the flow field, which is calculated at each iteration using a finite element method. A gradient-based local optimisation is then used through solving the

  17. Cone-beam computed tomography in children with cochlear implants: The effect of electrode array position on ECAP.

    Science.gov (United States)

    Lathuillière, Marine; Merklen, Fanny; Piron, Jean-Pierre; Sicard, Marielle; Villemus, Françoise; Menjot de Champfleur, Nicolas; Venail, Frédéric; Uziel, Alain; Mondain, Michel

    2017-01-01

    To assess the feasibility of using cone-beam computed tomography (CBCT) in young children with cochlear implants (CIs) and to study the effect of intracochlear position on electrophysiological and behavioral measurements. A total of 40 children with either unilateral or bilateral cochlear implants were prospectively included in the study. Electrode placement and insertion angles were studied in 55 Cochlear® implants (16 straight arrays and 39 perimodiolar arrays), using either CBCT or X-ray imaging. CBCT or X-ray imaging was scheduled when the children were leaving the recovery room. We recorded intraoperative and postoperative neural response telemetry threshold (T-NRT) values, intraoperative and postoperative electrode impedance values, as well as behavioral T (threshold) and C (comfort) levels on electrodes 1, 5, 10, 15 and 20. CBCT imaging was feasible without any sedation in 24 children (60%). Accidental scala vestibuli insertion was observed in 3 out of 24 implants as assessed by CBCT. The mean insertion angle was 339.7°±35.8°. The use of a perimodiolar array led to higher angles of insertion, lower postoperative T-NRT, and decreased behavioral T and C levels. We found no significant effect of either electrode array position or angle of insertion on electrophysiological data. CBCT appears to be a reliable tool for anatomical assessment of young children with CIs. Intracochlear position had no significant effect on the electrically evoked compound action potential (ECAP) threshold. Our CBCT protocol must be improved to increase the rate of successful investigations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. X-ray focusing using capillary arrays

    International Nuclear Information System (INIS)

    Nugent, K.A.; Chapman, H.N.

    1990-01-01

    A new form of X-ray focusing device based on glass capillary arrays is presented. Theoretical and experimental results for arrays of circular capillaries and theoretical and computational results for square-hole capillaries are given. It is envisaged that devices such as these will find wide application in X-ray optics as achromatic condensers and collimators. 3 refs., 4 figs

  19. Alignment data streams for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B; Amorim, A; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; Garcia, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: the physics stream, the express stream, the calibration stream, and the diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide the prompt reconstruction of this stream with a latency of at most 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and possibly gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework on dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and the performance issues.

  20. Alignment data stream for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B

    2010-01-01

    The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and the calibration stream. The calibration stream will be transferred to the Tier-0 facilities, which will allow the prompt reconstruction of this stream with an admissible latency of 8 hours, producing calibration constants of sufficient quality to permit a first-pass processing. An independent calibration stream has been developed and tested, which selects tracks at the level-2 trigger (LVL2) after reconstruction. The stream is composed of raw data, in byte-stream format, and contains only information from the relevant parts of the detector, in particular the hit information of the selected tracks. This leads to significantly improved bandwidth usage and storage capability. The stream will be used to derive and update the calibration and alignment constants, if necessary every 24 h. Processing is done using specialized algorithms running in the Athena framework on dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. The work addresses in particular the alignment requirements, the needs for track and hit selection, and timing and bandwidth issues.

  1. The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis

    Science.gov (United States)

    Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; pre="for the"> CTA Consortium, DIRAC Consortium,

    2017-10-01

    The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, therefore resulting in 4 PB of raw data per year and a total of 27 PB/year, including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes in the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidate to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites have been selected, we have started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may be finally composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of the telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundreds of telescope positions have been simulated, while for future instrument response function simulations, only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolutions of the CTA production system for the off-line data processing during CTA operations and
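A quick back-of-envelope check of the quoted data volume (the 0.9 GB/s rate and 1300 h/year figures are taken from the abstract; the conversion is the only thing added here):

```python
# Check: an average stream of 0.9 GB/s for ~1300 hours of observation
# per year should give roughly the quoted ~4 PB of raw data per year.
rate_gb_s = 0.9
hours_per_year = 1300
raw_pb_per_year = rate_gb_s * hours_per_year * 3600 / 1e6  # GB -> PB

# raw_pb_per_year is about 4.2 PB, consistent with the abstract's ~4 PB;
# the 27 PB/year total additionally includes archive and processed data.
```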

  2. Effect of wire shape on wire array discharge

    International Nuclear Information System (INIS)

    Shimomura, N.; Tanaka, Y.; Yushita, Y.; Nagata, M.; Teramoto, Y.; Katsuki, S.; Akiyama, H.

    2001-01-01

    Although considerable investigations of z-pinches for achieving nuclear fusion have been reported, little attention has been paid to how a wire array consisting of many parallel wires explodes. Instability existing in the wire array discharge has been shown. In this paper, the effect of wire shape on the unstable behavior of the wire array discharge is examined by numerical analysis. Claws formed on the wire during installation may cause a non-uniform current distribution in the wire array. The effect of errors in wire diameter introduced in production is computed by the Monte Carlo method. (author)

  3. Effect of wire shape on wire array discharge

    Energy Technology Data Exchange (ETDEWEB)

    Shimomura, N.; Tanaka, Y.; Yushita, Y.; Nagata, M. [University of Tokushima, Department of Electrical and Electronic Engineering, Tokushima (Japan); Teramoto, Y.; Katsuki, S.; Akiyama, H. [Kumamoto University, Department of Electrical and Computer Engineering, Kumamoto (Japan)

    2001-09-01

    Although considerable investigations of z-pinches for achieving nuclear fusion have been reported, little attention has been paid to how a wire array consisting of many parallel wires explodes. Instability existing in the wire array discharge has been shown. In this paper, the effect of wire shape on the unstable behavior of the wire array discharge is examined by numerical analysis. Claws formed on the wire during installation may cause a non-uniform current distribution in the wire array. The effect of errors in wire diameter introduced in production is computed by the Monte Carlo method. (author)

  4. Low-flow characteristics of streams in the Puget Sound region, Washington

    Science.gov (United States)

    Hidaka, F.T.

    1973-01-01

    Periods of low streamflow are usually the most critical factor for most water uses. The purpose of this report is to present data on low-flow characteristics of streams in the Puget Sound region, Washington, and to briefly explain some of the factors that influence low flow in the various basins. Presented are data on low-flow frequencies of streams in the Puget Sound region, as gathered at 150 gaging stations. Four indexes were computed from the low-flow-frequency curves and were used as a basis to compare the low-flow characteristics of the streams. The indexes are the (1) low-flow-yield index, expressed in unit runoff per square mile; (2) base-flow index, or the ratio of the median 7-day low flow to the average discharge; (3) slope index, or slope of the annual 7-day low-flow-frequency curve; and (4) spacing index, or spread between the 7-day and 183-day low-flow-frequency curves. The indexes showed a wide variation between streams due to the complex interrelation between climate, topography, and geology. The largest low-flow-yield indexes determined--greater than 1.5 cfs (cubic feet per second) per square mile--were for streams that head at high altitudes in the Cascade and Olympic Mountains and have their sources at glaciers. The smallest low-flow-yield indexes--less than 0.5 cfs per square mile--were for the small streams that drain the lowlands adjacent to Puget Sound. Indexes between the two extremes were for nonglacial streams that head at fairly high altitudes in areas of abundant precipitation. The base-flow index has variations that can be attributed to a basin's hydrogeology, with very little influence from climate. The largest base-flow indexes were obtained for streams draining permeable unconsolidated glacial and alluvial sediments in parts of the lowlands adjacent to Puget Sound. Large volumes of ground water in these materials sustain flows during late summer. The smallest indexes were computed for streams draining areas underlain by
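The first two indexes can be sketched in code. This is a minimal illustration, not the report's method: the report derives its statistics from multi-year low-flow-frequency curves (using the median 7-day low flow), whereas this sketch approximates them from a single year of daily discharge; the function name and inputs are hypothetical.

```python
import numpy as np

def low_flow_indexes(daily_q_cfs, drainage_area_sq_mi):
    """Approximate the low-flow-yield and base-flow indexes from one
    year of daily discharge (cfs). Single-year stand-in for the report's
    frequency-curve statistics."""
    q = np.asarray(daily_q_cfs, dtype=float)
    # Annual 7-day low flow: minimum of the 7-day moving average.
    seven_day_means = np.convolve(q, np.ones(7) / 7, mode="valid")
    q7 = seven_day_means.min()
    low_flow_yield = q7 / drainage_area_sq_mi  # cfs per square mile
    base_flow_index = q7 / q.mean()            # 7-day low flow / average discharge
    return low_flow_yield, base_flow_index
```

For a hypothetical basin of 50 sq mi with a constant 100 cfs record, the yield index is 2.0 cfs per square mile (a "glacier-fed" value by the report's classification) and the base-flow index is 1.0.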

  5. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

    A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two-dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY™, capable of two-frequency operation, and offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software.

  6. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data is taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth-reduction algorithms; the output of the camera then contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx™ FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes the prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
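The bandwidth-reduction idea can be illustrated in software: rather than forwarding every pixel, emit only events where the web image deviates from a reference beyond a threshold. This is a sketch of the general technique, not the paper's FPGA algorithm; the function name, reference model and threshold are illustrative.

```python
def defect_events(scan_line, reference_line, threshold):
    """Return (position, value) events for pixels that deviate from the
    reference by more than the threshold. Only these events need to be
    sent to the host, reducing bandwidth on defect-free material."""
    return [(i, v) for i, (v, r) in enumerate(zip(scan_line, reference_line))
            if abs(v - r) > threshold]

# A 6-pixel scan line with two defects against a uniform reference.
scan_line = [128, 127, 200, 129, 50, 128]
reference_line = [128] * 6
events = defect_events(scan_line, reference_line, 10)  # -> [(2, 200), (4, 50)]
```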

  7. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data-intensive scientific computations, as well as on-line analytical processing applications, are performed on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real-life data sets, show that our chunking is much more efficient than the existing approximate solutions.
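The cost metric can be made concrete with a simpler model than the paper's: assume chunks tile each axis exactly and a query's starting offset is uniform over the integer positions within a chunk. Along one axis, a query of length q over chunks of length c then touches 1 + (q − 1)/c chunks on average, and the axes multiply. This is an illustrative stand-in for the paper's probabilistic models, not their exact formulation.

```python
def expected_chunks(chunk_shape, query_shape):
    """Expected number of chunks a hyper-rectangular query touches,
    assuming exact tiling and a uniformly random integer alignment of
    the query within a chunk along each axis."""
    expected = 1.0
    for c, q in zip(chunk_shape, query_shape):
        expected *= 1 + (q - 1) / c  # exact under the uniform-offset model
    return expected

# With 10x20 chunks, a 10x20 query touches ~3.7 chunks on average --
# minimizing this product over chunk shapes of fixed volume is the
# optimization question the paper answers.
cost = expected_chunks((10, 20), (10, 20))
```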

  8. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams.
As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
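The inconsistency the authors describe arises because orbital elements are nonlinearly related, so element-wise means do not commute with the relations between elements. A toy numeric illustration (the two member orbits are hypothetical):

```python
# Two member orbits of a hypothetical stream: semi-major axis a (AU)
# and eccentricity e. Perihelion distance q = a * (1 - e) depends on
# both, so averaging a and e separately and then forming q from the
# means differs from averaging q itself over the members.
orbits = [(2.0, 0.9), (1.0, 0.5)]  # (a, e) pairs

mean_a = sum(a for a, _ in orbits) / len(orbits)            # 1.5
mean_e = sum(e for _, e in orbits) / len(orbits)            # 0.7
q_from_means = mean_a * (1 - mean_e)                        # 0.45

mean_q = sum(a * (1 - e) for a, e in orbits) / len(orbits)  # 0.35
```

The mismatch (0.45 vs. 0.35 AU here) is the kind of systematic error that can, in extreme cases, yield a catalog "mean orbit" that does not even cross the Earth's orbit.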

  9. Bearing estimation with acoustic vector-sensor arrays

    International Nuclear Information System (INIS)

    Hawkes, M.; Nehorai, A.

    1996-01-01

    We consider direction-of-arrival (DOA) estimation using arrays of acoustic vector sensors in free space, and derive expressions for the Cramér–Rao bound on the DOA parameters when there is a single source. The vector-sensor array is seen to have improved performance over the traditional scalar-sensor (pressure-sensor) array for two distinct reasons: its elements have an inherent directional sensitivity and the array makes a greater number of measurements. The improvement is greatest for small array apertures and low signal-to-noise ratios. Examination of the conventional beamforming and Capon DOA estimators shows that vector-sensor arrays can completely resolve the bearing, even with a linear array, and can remove the ambiguities associated with spatial undersampling. We also propose and analyze a diversely-oriented array of velocity sensors that possesses some of the advantages of the vector-sensor array without the increase in hardware and computation. In addition, in certain scenarios it can avoid problems with spatially correlated noise that the vector-sensor array may suffer. copyright 1996 American Institute of Physics
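The directional sensitivity of a single vector sensor can be shown with a minimal conventional-beamformer sketch. Assuming a 2-D problem and a noise-free, unit-amplitude plane wave, the sensor's pressure-plus-velocity measurement is proportional to [1, cos θ, sin θ], so even one sensor localizes the bearing without the left/right ambiguity of a pressure-only linear array. This is an illustration of the principle, not the paper's estimators or bounds.

```python
import numpy as np

# True bearing of the (noise-free) plane wave.
true_theta = np.deg2rad(50.0)
x = np.array([1.0, np.cos(true_theta), np.sin(true_theta)])  # [p, vx, vy]

# Conventional beamformer: correlate the measurement with the steering
# vector [1, cos(g), sin(g)] over a full 360-degree grid of bearings g.
grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
steering = np.stack([np.ones_like(grid), np.cos(grid), np.sin(grid)])
power = (steering.T @ x) ** 2  # peaks where 1 + cos(g - true_theta) is largest

estimate = np.rad2deg(grid[np.argmax(power)])  # single unambiguous peak
```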

  10. Interactive real-time media streaming with reliable communication

    Science.gov (United States)

    Pan, Xunyu; Free, Kevin M.

    2014-02-01

    Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented based on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software ρεύμα (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, ρεύμα utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alteration of playback, such as pausing playback or tracking to a
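The transport choice matters here: UDP favors low latency over delivery guarantees, so a streaming layer must carry its own sequencing to detect lost or reordered datagrams. A minimal loopback sketch of such a datagram exchange (in Python rather than the paper's .NET stack; the 2-byte sequence header is an illustrative framing, not the paper's protocol):

```python
import socket

# Receiver: bind to loopback and let the OS pick a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5.0)
addr = recv.getsockname()

# Sender: one datagram = 2-byte big-endian sequence number + payload.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"\x00\x01frame-data", addr)

packet, _ = recv.recvfrom(2048)
seq = int.from_bytes(packet[:2], "big")  # receiver checks ordering itself
payload = packet[2:]
send.close()
recv.close()
```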

  11. Computer-aided mapping of stream channels beneath the Lawrence Livermore National Laboratory Super Fund Site

    Energy Technology Data Exchange (ETDEWEB)

    Sick, M. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    The Lawrence Livermore National Laboratory (LLNL) site rests upon 300-400 feet of highly heterogeneous braided stream sediments which have been contaminated by a plume of volatile organic compounds (VOCs). The stream channels are filled with highly permeable coarse-grained materials that provide quick avenues for contaminant transport. The plume of VOCs has migrated off site in the TFA area, making it the area of greatest concern. I mapped the paleo-stream channels in the TFA area using SLICE, an LLNL Auto-CADD routine. SLICE constructed 2D cross sections and sub-horizontal views of chemical, geophysical, and lithologic data sets. I interpreted these 2D views as a braided stream environment, delineating the edges of stream channels. The interpretations were extracted from Auto-CADD and placed into Earth Vision's 3D modeling and viewing routines. Several 3D correlations have been generated, but no model has yet been chosen as a best fit.

  12. Electrostatic mechanism of nucleosomal array folding revealed by computer simulation.

    Science.gov (United States)

    Sun, Jian; Zhang, Qing; Schlick, Tamar

    2005-06-07

    Although numerous experiments indicate that the chromatin fiber displays salt-dependent conformations, the associated molecular mechanism remains unclear. Here, we apply an irregular Discrete Surface Charge Optimization (DiSCO) model of the nucleosome with all histone tails incorporated to describe by Monte Carlo simulations salt-dependent rearrangements of a nucleosomal array with 12 nucleosomes. The ensemble of nucleosomal array conformations display salt-dependent condensation in good agreement with hydrodynamic measurements and suggest that the array adopts highly irregular 3D zig-zag conformations at high (physiological) salt concentrations and transitions into the extended "beads-on-a-string" conformation at low salt. Energy analyses indicate that the repulsion among linker DNA leads to this extended form, whereas internucleosome attraction drives the folding at high salt. The balance between these two contributions determines the salt-dependent condensation. Importantly, the internucleosome and linker DNA-nucleosome attractions require histone tails; we find that the H3 tails, in particular, are crucial for stabilizing the moderately folded fiber at physiological monovalent salt.

  13. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion

  14. Data acquisition for experiments with multi-detector arrays

    Indian Academy of Sciences (India)

    Experiments with multi-detector arrays have special requirements and place higher demands on computer data acquisition systems. In this contribution we discuss data acquisition systems with special emphasis on multi-detector arrays and in particular we describe a new data acquisition system, AMPS, which we have ...

  15. DOA Estimation of Cylindrical Conformal Array Based on Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Minjie Wu

    2016-01-01

    Full Text Available Due to the variable curvature of the conformal carrier, the pattern of each element has a different direction. The traditional method of analyzing a conformal array is to use the Euler rotation angle and its matrix representation. However, this is computationally demanding, especially for irregular array structures. In this paper, we present a novel algorithm that combines geometric algebra with Multiple Signal Classification (MUSIC), termed GA-MUSIC, to solve the direction of arrival (DOA) for a cylindrical conformal array. On this basis, we derive the pattern and array manifold. Compared with the existing algorithms, our proposed one avoids the cumbersome matrix transformations and largely decreases the computational complexity. The simulation results verify the effectiveness of the proposed method.
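The record builds on MUSIC; on a plain uniform linear array (rather than a conformal one) the algorithm reduces to a few linear-algebra steps. A minimal sketch, with illustrative array sizes, angles, and noise levels that are not taken from the paper:

```python
import numpy as np

def steering(m, theta_deg, d=0.5):
    """ULA steering vector for m elements with spacing d (in wavelengths)."""
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(theta_deg)))

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 721)):
    """Classic MUSIC: project steering vectors onto the noise subspace of the
    sample covariance and pick the pseudo-spectrum peaks."""
    m, n = X.shape
    R = X @ X.conj().T / n
    _, v = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = v[:, : m - n_sources]             # noise subspace
    p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(m, t, d)) ** 2
                  for t in grid])
    peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] >= p[i + 1]]
    peaks.sort(key=lambda i: p[i], reverse=True)
    return sorted(grid[i] for i in peaks[:n_sources])

# two uncorrelated narrowband sources at -20 and 35 degrees, 8-element ULA
rng = np.random.default_rng(0)
m, snaps, angles = 8, 500, [-20.0, 35.0]
A = np.stack([steering(m, a) for a in angles], axis=1)
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = A @ S + 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
print(music_doa(X, 2))
```

The conformal-array case in the record replaces the simple steering model above with element patterns that vary over the curved carrier, which is where the geometric-algebra machinery comes in.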

  16. Patch holography using a double layer microphone array

    DEFF Research Database (Denmark)

    Gomes, Jesper Skovhus

    a closed local element mesh that surrounds the microphone array, and with a part of the mesh coinciding with a patch, the entire source is not needed in the model. Since the array has two layers, sources/reflections behind the array are also allowed. The Equivalent Source Method (ESM) is another technique in which the sound field is represented by a set of monopoles placed inside the source. In this paper these monopoles are distributed so that they surround the array, and the reconstruction is compared with the IBEM-based approach. The comparisons are based on computer simulations with a planar double layer array and sources with different shapes.

  17. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the crosscorrelation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between 1995 and 2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  18. Application of multiplicative array techniques for multibeam sounder systems

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    modification in terms of additional computation or hardware for improved array gain. The present work is devoted to the study of a better beamforming method, i.e. a multiplicative array technique with some modifications proposed by Brown and Rowland...

  19. Prerequisites for building a computer security incident response capability

    CSIR Research Space (South Africa)

    Mooi, M

    2015-08-01

    Full Text Available . 1]. 2) Handbook for Computer Security Incident Response Teams (CSIRTs) [18] (CMU-SEI): Providing guidance on building and running a CSIRT, this handbook has a particular focus on the incident handling service [18, p. xv]. In addition, a basic CSIRT...

  20. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of compute nodes and their memory also play an important role in the overall performance of the parallel application running on a supercomputer. DL...

  1. Hardware stream cipher with controllable chaos generator for colour image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2014-01-01

    This study presents a hardware realisation of a chaos-based stream cipher utilised for image encryption applications. A third-order chaotic system with signum non-linearity is implemented and a new post-processing technique is proposed to eliminate the bias from the original chaotic sequence. The proposed stream cipher utilises the processed chaotic output to mask and diffuse input pixels through several stages of XORing and bit permutations. The performance of the cipher is tested with several input images and compared with previously reported systems, showing superior security and higher hardware efficiency. The system is experimentally verified on a Xilinx Virtex 4 field programmable gate array (FPGA), achieving small area utilisation and a throughput of 3.62 Gb/s. © The Institution of Engineering and Technology 2013.
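The masking step the record describes, XORing pixel bytes with a processed chaotic sequence, can be imitated in a few lines of software. The logistic map below is a stand-in for the paper's third-order chaotic system, chosen only for illustration; this toy is not cryptographically secure and is not the authors' design:

```python
def logistic_keystream(seed: float, r: float, nbytes: int) -> bytes:
    """Generate nbytes of keystream from the logistic map x -> r*x*(1-x).
    A toy analogue of a chaotic generator; NOT cryptographically secure."""
    x = seed
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # crude quantisation of the chaotic state
    return bytes(out)

def xor_cipher(data: bytes, seed: float = 0.61803, r: float = 3.9999) -> bytes:
    """Mask data with the chaotic keystream; XOR makes it self-inverting."""
    ks = logistic_keystream(seed, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

plain = b"pixel data stream"
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain        # decryption is the same operation
```

A real design like the one in the record adds bias-removal post-processing and bit-permutation diffusion stages on top of the XOR masking.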

  2. EzArray: A web-based highly automated Affymetrix expression array data management and analysis system

    Directory of Open Access Journals (Sweden)

    Zhu Yuelin

    2008-01-01

    Full Text Available Abstract Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data are still challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics, statistics and computer skills for usage. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and get data analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited pre-knowledge of microarray data analysis can complete initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with the Gene Expression Omnibus (GEO) and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from http://www.ezarray.com/.

  3. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    Science.gov (United States)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising of a test-bed and an emulator, on which our concept of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets

  4. Solute transport in streams of varying morphology inferred from a high resolution network of potentiometric wireless chloride sensors

    Science.gov (United States)

    Klaus, Julian; Smettem, Keith; Pfister, Laurent; Harris, Nick

    2017-04-01

    There is ongoing interest in understanding and quantifying the travel times and dispersion of solutes moving through stream environments, including the hyporheic zone and/or in-channel dead zones, where retention affects biogeochemical cycling processes that are critical to stream ecosystem functioning. Modelling these transport and retention processes requires acquisition of tracer data from injection experiments where the concentrations are recorded downstream. Such experiments are often time consuming and costly, which may be the reason many modelling studies of chemical transport have tended to rely on relatively few well-documented field case studies. This leads to the need for fast and cheap distributed sensor arrays that respond instantly and record chemical transport at points of interest on timescales of seconds at various locations in the stream environment. To tackle this challenge we present data from several tracer experiments carried out in the Attert river catchment in Luxembourg, employing low-cost (on the order of a euro per sensor) potentiometric chloride sensors in a distributed array. We injected NaCl under various baseflow conditions in streams of different morphologies and observed solute transport at various distances and locations. These data are used to benchmark the sensors against data obtained from more expensive electrical conductivity meters. Furthermore, the data allowed spatial resolution of hydrodynamic mixing processes and identification of chemical 'dead zones' in the study reaches.
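Travel times from tracer tests like these are commonly estimated from the temporal moments of the breakthrough curve recorded at each sensor. A minimal sketch with synthetic data (the pulse shape, timing, and background level are assumptions for illustration, not the field data):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid integration, kept explicit for NumPy-version independence."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mean_travel_time(t, c, background=0.0):
    """Mean travel time as the temporal centroid (first moment) of a
    background-corrected tracer breakthrough curve c(t)."""
    cc = np.clip(np.asarray(c, float) - background, 0.0, None)
    return _trapz(np.asarray(t, float) * cc, t) / _trapz(cc, t)

# synthetic NaCl pulse: Gaussian breakthrough centred at 600 s riding on a
# 10 mg/L background concentration
t = np.linspace(0.0, 1800.0, 1801)
c = 10.0 + 40.0 * np.exp(-0.5 * ((t - 600.0) / 90.0) ** 2)
print(mean_travel_time(t, c, background=10.0))
```

Higher moments of the same curve give the dispersion, and comparing moments between upstream and downstream sensors isolates the reach in between.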

  5. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  6. Metal-coated microfluidic channels: An approach to eliminate streaming potential effects in nano biosensors.

    Science.gov (United States)

    Lee, Jieun; Wipf, Mathias; Mu, Luye; Adams, Chris; Hannant, Jennifer; Reed, Mark A

    2017-01-15

    We report a method to suppress streaming potential using an Ag-coated microfluidic channel on a p-type silicon nanowire (SiNW) array measured by a multiplexed electrical readout. The metal layer sets a constant electrical potential along the microfluidic channel for a given reference electrode voltage regardless of the flow velocity. Without the Ag layer, the magnitude and sign of the surface potential change on the SiNW depends on the flow velocity, width of the microfluidic channel and the device's location inside the microfluidic channel with respect to the reference electrode. Noise analysis of the SiNW array with and without the Ag coating in the fluidic channel shows that noise frequency peaks, resulting from the operation of a piezoelectric micropump, are eliminated using the Ag layer with two reference electrodes located at inlet and outlet. This strategy presents a simple platform to eliminate the streaming potential and can become a powerful tool for nanoscale potentiometric biosensors. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    Science.gov (United States)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady-state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
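The record's key approximation, replacing the time-varying Kalman gain with its asymptotic steady-state limit, can be illustrated on a scalar system. All model parameters below are invented for illustration; the paper applies the same idea to a state of over 170,000 elements:

```python
import numpy as np

def steady_state_gain(a, c, q, r, iters=200):
    """Iterate the Riccati recursion for the scalar system
    x_{k+1} = a*x_k + w (var q), y_k = c*x_k + v (var r)
    until the error covariance converges; the gain becomes time-invariant."""
    p = q
    k = 0.0
    for _ in range(iters):
        p_pred = a * p * a + q
        k = p_pred * c / (c * p_pred * c + r)
        p = (1.0 - k * c) * p_pred
    return k

def filter_series(y, a, c, q, r):
    """Steady-state Kalman filter: one fixed gain for the whole record,
    which is the computational saving the approximation buys."""
    k = steady_state_gain(a, c, q, r)
    x, est = 0.0, []
    for yk in y:
        x = a * x                   # predict
        x = x + k * (yk - c * x)    # correct with the fixed gain
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.standard_normal(400) * 0.1)   # slow random walk
y = truth + rng.standard_normal(400) * 0.5          # noisy observations
est = filter_series(y, a=1.0, c=1.0, q=0.01, r=0.25)
print(np.mean((est - truth) ** 2) < np.mean((y - truth) ** 2))
```

The matrix version replaces the scalar Riccati recursion with one on a reduced-dimension covariance, which is the other approximation the record combines with the steady-state limit.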

  8. StreamStats: A water resources web application

    Science.gov (United States)

    Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.

    2008-01-01

    Streamflow statistics, such as the 1-percent flood, the mean flow, and the 7-day 10-year low flow, are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. For example, estimates of the 1-percent flood (the flow that is exceeded, on average, once in 100 years and has a 1-percent chance of being exceeded in any year, sometimes referred to as the 100-year flood) are used to create flood-plain maps that form the basis for setting insurance rates and land-use zoning. This and other streamflow statistics also are used for dam, bridge, and culvert design; water-supply planning and management; water-use appropriations and permitting; wastewater and industrial discharge permitting; hydropower facility design and regulation; and the setting of minimum required streamflows to protect freshwater ecosystems. In addition, researchers, planners, regulators, and others often need to know the physical and climatic characteristics of the drainage basins (basin characteristics) and the influence of human activities, such as dams and water withdrawals, on streamflow upstream from locations of interest to understand the mechanisms that control water availability and quality at those locations. Knowledge of the streamflow network and downstream human activities also is necessary to adequately determine whether an upstream activity, such as a water withdrawal, can be allowed without adversely affecting downstream activities. Streamflow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no streamflow data are available to compute the statistics. At U.S. Geological Survey (USGS) streamflow data-collection stations, which include streamgaging stations, partial-record stations, and miscellaneous-measurement stations, streamflow statistics can be computed from available data for the stations. Streamflow data are collected continuously at streamgaging stations

  9. High performance multiple stream data transfer

    International Nuclear Information System (INIS)

    Rademakers, F.; Saiz, P.

    2001-01-01

    The ALICE detector at the LHC (CERN) will record raw data at a rate of 1.2 Gigabytes per second. Trying to analyse all this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres to use their computing infrastructure. The remote centres will reconstruct and analyse the events, and make the results available. Therefore high-rate data transfer between computing centres (Tiers) will become of paramount importance. The authors will present several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an ftp method that allows connections of several streams at the same time. Thanks to these multiple streams, it is possible to increase the rate at which the data is transferred. While several 'multiple stream ftp solutions' already exist, the authors' method is based on a parallel socket implementation which allows not only files but also objects (or any large message) to be sent in parallel. A prototype able to manage different transfers will be presented. This is the first step of a system, to be implemented, that will take care of the connections with the remote centres to exchange data and monitor the status of the transfers
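The multiple-stream idea, splitting a payload across several concurrent connections and reassembling it in order on the receiving side, can be sketched as follows. The thread pool below stands in for the parallel sockets; this is an illustrative simplification, not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def split(data: bytes, n: int):
    """Cut data into n roughly equal chunks, each tagged with its index."""
    step = -(-len(data) // n)             # ceiling division
    return [(i, data[i * step:(i + 1) * step]) for i in range(n)]

def send_chunk(tagged):
    """Stand-in for one parallel stream: in a real transfer this would
    write the chunk over its own TCP connection and read it back."""
    idx, chunk = tagged
    return idx, chunk

def parallel_transfer(data: bytes, streams: int = 4) -> bytes:
    """Transfer chunks concurrently, then reassemble them by index,
    since streams may complete out of order."""
    with ThreadPoolExecutor(max_workers=streams) as pool:
        received = list(pool.map(send_chunk, split(data, streams)))
    received.sort(key=lambda t: t[0])
    return b"".join(chunk for _, chunk in received)

payload = bytes(range(256)) * 100
assert parallel_transfer(payload, streams=7) == payload
```

The aggregate throughput gain comes from each stream keeping its own TCP window full, so the sum of the streams can saturate a high-latency wide-area link that a single connection cannot.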

  10. STREAM: A First Programming Process

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Kölling, Michael

    2009-01-01

    Programming is recognized as one of seven grand challenges in computing education. Decades of research have shown that the major problems novices experience are composition-based—they may know what the individual programming language constructs are, but they do not know how to put them together. Despite this fact, textbooks, educational practice, and programming education research hardly address the issue of teaching the skills needed for systematic development of programs. We provide a conceptual framework for incremental program development, called Stepwise Improvement, which unifies best ... to derive a programming process, STREAM, designed specifically for novices. STREAM is a carefully down-scaled version of a full and rich agile software engineering process particularly suited for novices learning object-oriented programming. In using it we hope to achieve two things: to help novice...

  11. The LHCb Turbo stream

    Energy Technology Data Exchange (ETDEWEB)

    Puig, A., E-mail: albert.puig@cern.ch

    2016-07-11

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 with a selection of physics analyses. It is anticipated that the Turbo stream will be adopted by an increasing number of analyses during the remainder of LHC Run II (2015–2018) and ultimately in Run III (starting in 2020) with the upgraded LHCb detector.

  12. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    Abstract: ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
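The invariance the record describes, two identical subarrays displaced with respect to each other, is exactly what least-squares ESPRIT exploits on a uniform linear array: the overlapping subarrays differ only by a phase rotation whose eigenvalues encode the arrival angles, so no spectral search is needed. A toy sketch (element count, angles, and noise level are illustrative assumptions):

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Least-squares ESPRIT for a uniform linear array with spacing d
    (in wavelengths): estimate DOAs from the rotational invariance
    between the two maximally overlapping subarrays."""
    R = X @ X.conj().T / X.shape[1]
    _, v = np.linalg.eigh(R)
    Es = v[:, -n_sources:]                   # signal subspace
    Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]   # rotational operator between subarrays
    phases = np.angle(np.linalg.eigvals(Psi))
    return sorted(np.degrees(np.arcsin(-phases / (2 * np.pi * d))))

# an 8-element ULA observing two sources at -20 and 35 degrees
rng = np.random.default_rng(2)
m, snaps, angles = 8, 500, [-20.0, 35.0]
steer = lambda th: np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(th)))
A = np.stack([steer(th) for th in angles], axis=1)
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = A @ S + 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
print(esprit_doa(X, 2))
```

The TLS variant the record mentions solves the same invariance equation by total least squares instead of the pseudoinverse used here, which improves robustness when both subspace estimates are noisy.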

  13. Determination of the self purification of streams using tracers

    International Nuclear Information System (INIS)

    Salviano, J.S.

    1982-04-01

    A methodology for the 'in situ' evaluation of the self-purification of streams is discussed. It consists of the simultaneous injection of two tracers into the stream. One of the tracers is oxidized by biochemical processes; it can be either artificially supplied to the stream, or a naturally present component can be used. This tracer is used for the determination of the self-purification parameters. The other tracer is conservative and allows for the hydrodynamic effects. Tests have been carried out in two streams with quite different hydrodynamic and physicochemical conditions. In the first stream, with a flow-rate of about 0.9 m³/s, urea was used as the nonconservative tracer. In the other stream, which had a flow-rate of about 5 m³/s, only a radioactive tracer was used, and the rate of biochemical oxidation was determined from BOD measurements. Calculations were implemented on a digital computer. In both cases it was found that the reoxygenation rate is more conveniently determined by empirical formulas. Results from both tests have been deemed realistic by comparison with similar experiments. (Author) [pt
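The interplay the record measures, first-order biochemical oxidation against reaeration, is classically captured by the Streeter-Phelps deficit equation. A hedged numerical sketch with invented rate constants and loads, not values from the experiments:

```python
import math

def streeter_phelps(D0, L0, kd, kr, t):
    """Streeter-Phelps dissolved-oxygen deficit D(t) downstream of a load:
    first-order BOD decay (rate kd, 1/day) against reaeration (rate kr, 1/day),
    with initial deficit D0 and ultimate BOD L0 (both mg/L)."""
    if abs(kr - kd) < 1e-12:                       # degenerate equal-rate case
        return (D0 + kd * L0 * t) * math.exp(-kd * t)
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
        + D0 * math.exp(-kr * t)

# illustrative numbers: 2 mg/L initial deficit, 10 mg/L ultimate BOD
ts = [0.5 * i for i in range(11)]                  # travel time in days
sag = [streeter_phelps(2.0, 10.0, kd=0.35, kr=0.6, t=t) for t in ts]
t_crit = max(range(len(ts)), key=lambda i: sag[i])
print(ts[t_crit], max(sag))                        # worst deficit and when it occurs
```

Fitting kd to the decay of the nonconservative tracer (or to BOD measurements, as in the second stream) and kr to empirical reaeration formulas, as the record suggests, closes the model for a given reach.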

  14. Acoustic streaming in pulsating flows through porous media

    International Nuclear Information System (INIS)

    Valverde, J.M.; Durán-Olivencia, F.J.

    2014-01-01

    When a body immersed in a viscous fluid is subjected to a sound wave (or, equivalently, the body oscillates in the fluid otherwise at rest), a rotational fluid stream develops across a boundary layer near the fluid-body interface. This so-called acoustic streaming phenomenon is responsible for a notable enhancement of heat, mass and momentum transfer and takes place in any process involving two phases subjected to relative oscillations. Understanding the fundamental mechanisms governing acoustic streaming in two-phase flows is of great interest for a wide range of applications such as sonoprocessed fluidized bed reactors, thermoacoustic refrigerators/engines, pulsatile flows through veins/arteries, hemodialysis devices, pipes in off-shore platforms, offshore piers, vibrating structures in the power-generating industry, lab-on-a-chip microfluidics and microgravity acoustic levitation, and solar thermal collectors, to name a few. The aim of engineering studies on this vast diversity of systems is oriented towards maximizing the efficiency of each particular process. Even though practical problems are usually approached from disparate disciplines without any apparent linkage, the behavior of these systems is influenced by the same underlying physics. In general, acoustic streaming occurs within the interstices of porous media and usually in the presence of externally imposed steady fluid flows, which gives rise to important effects arising from the interference between viscous boundary layers developed around nearby solid surfaces and the nonlinear coupling between the oscillating and steady flows. This paper is mainly devoted to highlighting the fundamental physics behind acoustic streaming in porous media in order to provide a simple instrument to assess the relevance of this phenomenon in each particular application. The exact microscopic Navier-Stokes equations will be numerically solved for a simplified 2D system consisting of a regular array of oscillating

  15. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  16. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  17. A Streaming Language Implementation of the Discontinuous Galerkin Method

    Science.gov (United States)

    Barth, Timothy; Knight, Timothy

    2005-01-01

    We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.

  18. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
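The synchronization instruction described above, where every processor must finish its current task before any proceeds, behaves like a barrier. A minimal thread-based sketch of that idea, with Python threads standing in for the processor array (purely illustrative, not the patented design):

```python
import threading

def worker(pid, barrier, log, lock):
    """Each 'processor' runs its own copy of the program independently,
    then waits at the barrier before the data-dependent phase begins."""
    with lock:
        log.append(("phase1", pid))
    barrier.wait()                       # overall array synchronization point
    with lock:
        log.append(("phase2", pid))

n = 8
log, lock = [], threading.Lock()
barrier = threading.Barrier(n)
threads = [threading.Thread(target=worker, args=(i, barrier, log, lock))
           for i in range(n)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# no phase-2 entry may precede any phase-1 entry
first_p2 = min(i for i, (ph, _) in enumerate(log) if ph == "phase2")
assert first_p2 == n and all(ph == "phase1" for ph, _ in log[:first_p2])
```

Between barriers the threads interleave in arbitrary order, mirroring the text's point that the processors are not lock-stepped except at the explicit synchronization instruction.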

  19. Application of the Hydroecological Integrity Assessment Process for Missouri Streams

    Science.gov (United States)

    Kennen, Jonathan G.; Henriksen, James A.; Heasley, John; Cade, Brian S.; Terrell, James W.

    2009-01-01

    Natural flow regime concepts and theories have established the justification for maintaining or restoring the range of natural hydrologic variability so that physicochemical processes, native biodiversity, and the evolutionary potential of aquatic and riparian assemblages can be sustained. A synthesis of recent research advances in hydroecology, coupled with stream classification using hydroecologically relevant indices, has produced the Hydroecological Integrity Assessment Process (HIP). HIP consists of (1) a regional classification of streams into hydrologic stream types based on flow data from long-term gaging-station records for relatively unmodified streams, (2) an identification of stream-type specific indices that address 11 subcomponents of the flow regime, (3) an ability to establish environmental flow standards, (4) an evaluation of hydrologic alteration, and (5) a capacity to conduct alternative analyses. The process starts with the identification of a hydrologic baseline (reference condition) for selected locations, uses flow data from a stream-gage network, and proceeds to classify streams into hydrologic stream types. Concurrently, the analysis identifies a set of non-redundant and ecologically relevant hydrologic indices for 11 subcomponents of flow for each stream type. Furthermore, regional hydrologic models for synthesizing flow conditions across a region and the development of flow-ecology response relations for each stream type can be added to further enhance the process. The application of HIP to Missouri streams identified five stream types: (1) intermittent, (2) perennial runoff-flashy, (3) perennial runoff-moderate baseflow, (4) perennial groundwater-stable, and (5) perennial groundwater-super stable.
Two Missouri-specific computer software programs were developed: (1) a Missouri Hydrologic Assessment Tool (MOHAT) which is used to establish a hydrologic baseline, provide options for setting environmental flow standards, and compare past and

  20. Insertion characteristics and placement of the Mid-Scala electrode array in human temporal bones using detailed cone beam computed tomography.

    Science.gov (United States)

    Dietz, Aarno; Gazibegovic, Dzemal; Tervaniemi, Jyrki; Vartiainen, Veli-Matti; Löppönen, Heikki

    2016-12-01

    The aim of this study was to evaluate the insertion results and placement of the new Advanced Bionics HiFocus Mid-Scala (HFms) electrode array, inserted through the round window membrane, in eight fresh human temporal bones using cone beam computed tomography (CBCT). Pre- and post-insertion CBCT scans were registered to create a 3D reconstruction of the cochlea with the array inserted. With an image fusion technique, both the bony edges of the cochlea and the electrode array in situ could be accurately determined, making it possible to identify the exact position of the electrode array within the scala tympani. Vertical and horizontal scalar location was measured at four points along the cochlear base: at angular insertion depths of 90°, 180°, and 270°, and at electrode 16, the most basal electrode. Smooth insertion through the round window membrane was possible in all temporal bones. The imaging results showed that there were no dislocations from the scala tympani into the scala vestibuli. The HFms electrode was positioned in the middle of the scala along the whole electrode array in three out of the eight bones and in 62% of the individual locations measured along the base of the cochlea. In only one cochlea was the electrode observed in close proximity to the basilar membrane, indicating possible contact. The results and assessments presented in this study appear to be highly accurate. Although further validation including histopathology is needed, the image fusion technique described in this study currently represents the most accurate method for intracochlear electrode assessment obtainable with CBCT.

  1. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel multiprocessor system-on-chip (MPSoC) architectures called invasive tightly coupled processor arrays (TCPAs). It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees on non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture, consisting of locally interconnected VLIW processing elements, can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses that GPUs rely on, and they may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  2. The Stream-Catchment (StreamCat) and Lake-Catchment ...

    Science.gov (United States)

    Background/Question/Methods Lake and stream conditions respond to both natural and human-related landscape features. Characterizing these features within contributing areas (i.e., delineated watersheds) of streams and lakes could improve our understanding of how biological conditions vary spatially and improve the use, management, and restoration of these aquatic resources. However, the specialized geospatial techniques required to define and characterize stream and lake watersheds have limited their widespread use in both scientific and management efforts at large spatial scales. We developed the StreamCat and LakeCat Datasets to model, predict, and map the probable biological conditions of streams and lakes across the conterminous US (CONUS). StreamCat and LakeCat contain watershed-level characterizations of several hundred natural (e.g., soils, geology, climate, and land cover) and anthropogenic (e.g., urbanization, agriculture, mining, and forest management) landscape features for ca. 2.6 million stream segments and 376,000 lakes across the CONUS, respectively. These datasets can be paired with field samples to provide independent variables for modeling and other analyses. We paired 1,380 stream and 1,073 lake samples from the USEPA's National Aquatic Resource Surveys with StreamCat and LakeCat and used random forest (RF) to model and then map an invertebrate condition index and chlorophyll a concentration, respectively. Results/Conclusions The invertebrate

  3. The LHCb Turbo stream

    CERN Document Server

    AUTHOR|(CDS)2070171

    2016-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 wi...

  4. Tidal Turbines’ Layout in a Stream with Asymmetry and Misalignment

    Directory of Open Access Journals (Sweden)

    Nicolas Guillou

    2017-11-01

    Full Text Available A refined assessment of tidal current variability is a prerequisite for successful turbine deployment in the marine environment. However, numerical evaluations of the tidal kinetic energy resource rely, most of the time, on integrated parameters such as the averaged or maximum stream powers. Predictions from a high-resolution three-dimensional model are exploited here to characterize the asymmetry and misalignment between the flood and ebb tidal currents in the “Raz de Sein”, a strait off western Brittany (France) with strong potential for array development. A series of parameters is considered to assess resource variability and refine the cartography of local potential tidal stream energy sites. The strait is characterized by strong tidal flow divergence, with current asymmetry liable to vary output power by 60% over a tidal cycle. Pronounced misalignments over 20° are furthermore identified in a great part of the energetic locations, and this may account for a deficit of the monthly averaged extractable energy by more than 12%. As sea space is limited for turbines, it is finally suggested to aggregate flood- and ebb-dominant stream powers on both parts of the strait to output energy with reduced asymmetry.
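
    The 60% output-power swing described above follows directly from the cubic dependence of kinetic power density on current speed. A minimal sketch (the seawater density and the peak speeds below are illustrative assumptions, not values from the study):

```python
RHO = 1025.0  # nominal seawater density in kg/m^3 (assumed)

def power_density(speed):
    """Kinetic power density of a tidal stream in W/m^2: P = 0.5 * rho * v^3."""
    return 0.5 * RHO * speed ** 3

def flood_ebb_asymmetry(v_flood, v_ebb):
    """Relative flood/ebb difference in available power over a tidal cycle."""
    p_f, p_e = power_density(v_flood), power_density(v_ebb)
    return abs(p_f - p_e) / max(p_f, p_e)

# Hypothetical site: flood peaks at 2.1 m/s, ebb at 2.5 m/s.
asym = flood_ebb_asymmetry(2.1, 2.5)
```

    Because power scales as v³, a modest 16% difference in peak speed already cuts the weaker phase's power by roughly 40%, which is why asymmetry and misalignment matter so much for turbine siting.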

  5. A PCA-Based Change Detection Framework for Multidimensional Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2015-08-10

    Detecting changes in multidimensional data streams is an important and challenging task. In unsupervised change detection, changes are usually detected by comparing the distribution in a current (test) window with a reference window. It is thus essential to design divergence metrics and density estimators for comparing the data distributions, which have mostly been developed for univariate data; the multidimensional setting makes both density estimation and distribution comparison considerably harder. In this paper, we propose a framework for detecting changes in multidimensional data streams based on principal component analysis, which is used to project data into a lower-dimensional space, thus facilitating density estimation and change-score calculations. The proposed framework also has advantages over existing approaches: it reduces computational costs with an efficient density estimator, improves the change-score calculation by introducing effective divergence metrics, and minimizes the effort required from users to set the threshold parameter by using the Page-Hinkley test. The evaluation results on synthetic and real data show that our framework outperforms two baseline methods in terms of both detection accuracy and computational costs.
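
    The pipeline described above (project the stream onto principal components learned from a reference window, then monitor a one-dimensional change score with the Page-Hinkley test) can be sketched as follows. This is a simplified illustration, not the paper's implementation; all parameter values and the synthetic data are assumptions:

```python
import numpy as np

def first_pc(ref):
    """First principal component of the reference window (rows = samples)."""
    _, _, vt = np.linalg.svd(ref - ref.mean(axis=0), full_matrices=False)
    pc = vt[0]
    return pc * np.sign(pc[np.argmax(np.abs(pc))])  # fix an arbitrary sign

def page_hinkley(scores, delta=0.25, lam=20.0):
    """Index at which the Page-Hinkley test flags an upward mean shift, or -1."""
    mean = m = m_min = 0.0
    for t, x in enumerate(scores, 1):
        mean += (x - mean) / t        # running mean of the change score
        m += x - mean - delta         # cumulative deviation
        m_min = min(m_min, m)
        if m - m_min > lam:
            return t - 1
    return -1

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 5)); ref[:, 0] *= 5.0     # reference window
pre = rng.normal(size=(100, 5)); pre[:, 0] *= 5.0     # stream before the change
post = rng.normal(size=(100, 5))
post[:, 0] = post[:, 0] * 5.0 + 15.0                  # mean shift at sample 100
stream = np.vstack([pre, post])

pc = first_pc(ref)
ref_scores = ref @ pc
scores = (stream @ pc - ref_scores.mean()) / ref_scores.std()  # 1-D change score
idx = page_hinkley(scores)   # flags shortly after the injected change
```

    Projecting onto the dominant principal component reduces the multivariate comparison to a one-dimensional test, which is exactly what makes cheap sequential detectors such as Page-Hinkley applicable.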

  6. Streaming Pool: reuse, combine and create reactive streams with pleasure

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    When connecting together heterogeneous and complex systems, it is not easy to exchange data between components. Streams of data are successfully used in industry in order to overcome this problem, especially in the case of "live" data. Streams are a specialization of the Observer design pattern and they provide asynchronous and non-blocking data flow. The ongoing effort of the ReactiveX initiative is one example that demonstrates how demanding this technology is even for big companies. Bridging the discrepancies of different technologies with common interfaces is already done by the Reactive Streams initiative and, in the JVM world, via reactive-streams-jvm interfaces. Streaming Pool is a framework for providing and discovering reactive streams. Through the mechanism of dependency injection provided by the Spring Framework, Streaming Pool provides a so called Discovery Service. This object can discover and chain streams of data that are technologically agnostic, through the use of Stream IDs. The stream to ...

  7. Experimental study of surface insulated-standard hybrid tungsten planar wire array Z-pinches at “QiangGuang-I” facility

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Liang; Peng, Bodong; Yuan, Yuan; Zhang, Mei; Zhao, Chen; Zhao, Jizhen; Wang, Liangping [State Key Laboratory of Intense Pulsed Radiation Simulation and Effect (Northwest Institute of Nuclear Technology), Xi' an 710024 (China); Li, Yang, E-mail: liyang@nint.ac.cn; Li, Mo [State Key Laboratory of Intense Pulsed Radiation Simulation and Effect (Northwest Institute of Nuclear Technology), Xi' an 710024 (China); Xi' an Jiaotong University, Xi' an 710049 (China)

    2016-01-15

    The experimental results of insulated-standard hybrid wire-array Z-pinches carried out on the “QiangGuang-I” facility at the Northwest Institute of Nuclear Technology are presented and discussed. Surface insulation imposes a significant influence on the dynamics and radiation characteristics of the hybrid wire-array Z-pinches, especially in the early stage (t/t_imp < 0.6). The expansion of insulated wires at the ablation stage is suppressed, while the streams stripped from the insulated wires move faster than those from the standard wires. The foot of the X-ray radiation is enhanced as the number of insulated wires increases: 19.6 GW, 33.6 GW, and 68.6 GW for shots 14037S, 14028H, and 14039I, respectively. The surface insulation also introduces nonhomogeneity along a single wire: the streams move much faster near the electrodes. The colliding boundary of the hybrid wire-array Z-pinches is biased toward the insulated side by approximately 0.6 mm.

  8. SAQC: SNP Array Quality Control

    Directory of Open Access Journals (Sweden)

    Li Ling-Hui

    2011-04-01

    Full Text Available Abstract Background Genome-wide single-nucleotide polymorphism (SNP) arrays containing hundreds of thousands of SNPs from the human genome have proven useful for studying important human genome questions. Data quality of SNP arrays plays a key role in the accuracy and precision of downstream data analyses. However, good indices for assessing data quality of SNP arrays have not yet been developed. Results We developed new quality indices to measure the quality of SNP arrays and/or DNA samples and investigated their statistical properties. The indices quantify a departure of estimated individual-level allele frequencies (AFs) from expected frequencies via standardized distances. The proposed quality indices followed lognormal distributions in several large genomic studies that we empirically evaluated. AF reference data and quality index reference data for different SNP array platforms were established based on samples from various reference populations. Furthermore, a confidence interval method based on the underlying empirical distributions of quality indices was developed to identify poor-quality SNP arrays and/or DNA samples. Analyses of authentic biological data and simulated data show that this new method is sensitive and specific for the detection of poor-quality SNP arrays and/or DNA samples. Conclusions This study introduces new quality indices, establishes references for AFs and quality indices, and develops a detection method for poor-quality SNP arrays and/or DNA samples. We have developed a new computer program that utilizes these methods, called SNP Array Quality Control (SAQC). SAQC software is written in R and R-GUI and was developed as a user-friendly tool for the visualization and evaluation of data quality of genome-wide SNP arrays. The program is available online (http://www.stat.sinica.edu.tw/hsinchou/genetics/quality/SAQC.htm).
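
    The core idea, a standardized distance between an individual's estimated allele frequencies and population reference values, can be sketched schematically. The actual SAQC indices are more elaborate; the function name and the root-mean-square aggregation below are assumptions for illustration only:

```python
import math

def af_quality_index(obs_af, ref_mean, ref_sd):
    """Root-mean-square standardized distance of observed allele frequencies
    from their reference distribution; larger values suggest poorer quality."""
    z = [(o, (o - m) / s) for o, m, s in zip(obs_af, ref_mean, ref_sd)]
    return math.sqrt(sum(d * d for _, d in z) / len(z))

# A sample matching the reference scores 0; a deviating sample scores higher.
good = af_quality_index([0.50, 0.31], [0.50, 0.31], [0.02, 0.03])
bad = af_quality_index([0.58, 0.40], [0.50, 0.31], [0.02, 0.03])
```

    Thresholding such an index against its empirical (e.g., lognormal) reference distribution is what turns the distance into a pass/fail quality call.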

  9. Self-assembled ordered carbon-nanotube arrays and membranes.

    Energy Technology Data Exchange (ETDEWEB)

    Overmyer, Donald L.; Siegal, Michael P.; Yelton, William Graham

    2004-11-01

    Imagine free-standing flexible membranes with highly-aligned arrays of carbon nanotubes (CNTs) running through their thickness. Perhaps with both ends of the CNTs open for highly controlled nanofiltration? Or CNTs at heights uniformly above a polymer membrane for a flexible array of nanoelectrodes or field-emitters? How about CNT films with incredible amounts of accessible surface area for analyte adsorption? These self-assembled crystalline nanotubes consist of multiple layers of graphene sheets rolled into concentric cylinders. Tube diameters (3-300 nm), inner-bore diameters (2-15 nm), and lengths (nanometers - microns) are controlled to tailor physical, mechanical, and chemical properties. We proposed to explore growth and characterize nanotube arrays to help determine their exciting functionality for Sandia applications. Thermal chemical vapor deposition growth in a furnace nucleates from a metal catalyst. Ordered arrays grow using templates from self-assembled hexagonal arrays of nanopores in anodized-aluminum oxide. Polymeric-binders can mechanically hold the CNTs in place for polishing, lift-off, and membrane formation. The stiffness, electrical and thermal conductivities of CNTs make them ideally suited for a wide-variety of possible applications. Large-area, highly-accessible gas-adsorbing carbon surfaces, superb cold-cathode field-emission, and unique nanoscale geometries can lead to advanced microsensors using analyte adsorption, arrays of functionalized nanoelectrodes for enhanced electrochemical detection of biological/explosive compounds, or mass-ionizers for gas-phase detection. Materials studies involving membrane formation may lead to exciting breakthroughs in nanofiltration/nanochromatography for the separation of chemical and biological agents. With controlled nanofilter sizes, ultrafiltration will be viable to separate and preconcentrate viruses and many strains of bacteria for 'down-stream' analysis.

  10. Interaction between stream temperature, streamflow, and groundwater exchanges in alpine streams

    Science.gov (United States)

    Constantz, James E.

    1998-01-01

    Four alpine streams were monitored to continuously collect stream temperature and streamflow for periods ranging from a week to a year. In a small stream in the Colorado Rockies, diurnal variations in both stream temperature and streamflow were significantly greater in losing reaches than in gaining reaches, with minimum streamflow losses occurring early in the day and maximum losses occurring early in the evening. Using measured stream temperature changes, diurnal streambed infiltration rates were predicted to increase as much as 35% during the day (based on a heat and water transport groundwater model), while the measured increase in streamflow loss was 40%. For two large streams in the Sierra Nevada Mountains, annual stream temperature variations ranged from 0° to 25°C. In summer months, diurnal stream temperature variations were 30–40% of annual stream temperature variations, owing to reduced streamflows and increased atmospheric heating. Previous reports document that one Sierra stream site generally gains groundwater during low flows, while the second Sierra stream site may lose water during low flows. For August the diurnal streamflow variation was 11% at the gaining stream site and 30% at the losing stream site. On the basis of measured diurnal stream temperature variations, streambed infiltration rates were predicted to vary diurnally as much as 20% at the losing stream site. Analysis of results suggests that evapotranspiration losses determined diurnal streamflow variations in the gaining reaches, while in the losing reaches, evapotranspiration losses were compounded by diurnal variations in streambed infiltration. Diurnal variations in stream temperature were reduced in the gaining reaches as a result of discharging groundwater of relatively constant temperature. For the Sierra sites, comparison of results with those from a small tributary demonstrated that stream temperature patterns were useful in delineating discharges of bank storage following

  11. Shielding in ungated field emitter arrays

    Energy Technology Data Exchange (ETDEWEB)

    Harris, J. R. [U.S. Navy Reserve, Navy Operational Support Center New Orleans, New Orleans, Louisiana 70143 (United States); Jensen, K. L. [Code 6854, Naval Research Laboratory, Washington, D.C. 20375 (United States); Shiffler, D. A. [Directed Energy Directorate, Air Force Research Laboratory, Albuquerque, New Mexico 87117 (United States); Petillo, J. J. [Leidos, Billerica, Massachusetts 01821 (United States)

    2015-05-18

    Cathodes consisting of arrays of high aspect ratio field emitters are of great interest as sources of electron beams for vacuum electronic devices. The desire for high currents and current densities drives the cathode designer towards a denser array, but for ungated emitters, denser arrays also lead to increased shielding, in which the field enhancement factor β of each emitter is reduced due to the presence of the other emitters in the array. To facilitate the study of these arrays, we have developed a method for modeling high aspect ratio emitters using tapered dipole line charges. This method can be used to investigate proximity effects from similar emitters an arbitrary distance away and is much less computationally demanding than competing simulation approaches. Here, we introduce this method and use it to study shielding as a function of array geometry. Emitters with aspect ratios of 10^2–10^4 are modeled, and the shielding-induced reduction in β is considered as a function of tip-to-tip spacing for emitter pairs and for large arrays with triangular and square unit cells. Shielding is found to be negligible when the emitter spacing is greater than the emitter height for the two-emitter array, or about 2.5 times the emitter height in the large arrays, in agreement with previously published results. Because the onset of shielding occurs at virtually the same emitter spacing in the square and triangular arrays, the triangular array is preferred for its higher emitter density at a given emitter spacing. The primary contribution to shielding in large arrays is found to come from emitters within a distance of three times the unit cell spacing for both square and triangular arrays.

  12. Seasonal dynamics of ichthyodiversity in a hill stream of the Darjeeling Himalaya, West Bengal, India

    Directory of Open Access Journals (Sweden)

    M.L. Acharjee

    2014-12-01

    Full Text Available The small torrential spring-fed hill stream Relli in the Darjeeling Himalaya of West Bengal was studied from March 2007 to February 2009 to assess the seasonal dynamics and diversity of its fish populations. The study revealed that the stream sustained 25 rheophilic cold-water fish species from 15 genera and five families, having ornamental, food, and sport value. Seven omnivorous species were abundant, and the array of juveniles and subadults suggests this stream is used as a breeding and nursery ground for some species. The stream harboured fish with unique modifications such as adhesive structures. Analysis of monthly data indicates that abundance and diversity indices increased slightly during April–May and sharply during October–November, indicating significant seasonal variations, with the low diversity observed during monsoon months reflecting environmental stresses. Evenness was high at all sampling sites and inversely related to the dominance index of the ichthyofauna. The density and diversity of fish assemblages along the gradient of the stream may be interrupted by anthropogenic disturbances. Our study provides baseline data which may be helpful for the conservation and management of fish species and in formulating fishery policy.

  13. Phased Array Imaging of Complex-Geometry Composite Components.

    Science.gov (United States)

    Brath, Alex J; Simonetti, Francesco

    2017-10-01

    Progress in computational fluid dynamics and the availability of new composite materials are driving major advances in the design of aerospace engine components which now have highly complex geometries optimized to maximize system performance. However, shape complexity poses significant challenges to traditional nondestructive evaluation methods whose sensitivity and selectivity rapidly decrease as surface curvature increases. In addition, new aerospace materials typically exhibit an intricate microstructure that further complicates the inspection. In this context, an attractive solution is offered by combining ultrasonic phased array (PA) technology with immersion testing. Here, the water column formed between the complex surface of the component and the flat face of a linear or matrix array probe ensures ideal acoustic coupling between the array and the component as the probe is continuously scanned to form a volumetric rendering of the part. While the immersion configuration is desirable for practical testing, the interpretation of the measured ultrasonic signals for image formation is complicated by reflection and refraction effects that occur at the water-component interface. To account for refraction, the geometry of the interface must first be reconstructed from the reflected signals and subsequently used to compute suitable delay laws to focus inside the component. These calculations are based on ray theory and can be computationally intensive. Moreover, strong reflections from the interface can lead to a thick dead zone beneath the surface of the component which limits sensitivity to shallow subsurface defects. This paper presents a general approach that combines advanced computing for rapid ray tracing in anisotropic media with a 256-channel parallel array architecture. 
The full-volume inspection of complex-shape components is enabled through the combination of both reflected and transmitted signals through the part using a pair of arrays held in a yoke
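
    The delay-law computation that the paper accelerates can be illustrated for a single flat water-component interface: find the Fermat (minimum-time) path from each array element through the interface to the focal point, then fire the elements so that all wavefronts arrive together. A brute-force sketch; the geometry, sound speeds, and element layout are assumptions, not values from the paper:

```python
import numpy as np

C_WATER, C_PART = 1480.0, 6300.0   # sound speeds in m/s (assumed values)

def travel_time(elem_x, focus, z_int, n=2001):
    """Minimum time from an element at (elem_x, 0) through a flat interface
    at depth z_int to the focus, found by searching over the interface
    crossing point (a brute-force application of Fermat's principle)."""
    xs = np.linspace(min(elem_x, focus[0]) - 0.05, max(elem_x, focus[0]) + 0.05, n)
    t_water = np.hypot(xs - elem_x, z_int) / C_WATER             # element -> interface
    t_part = np.hypot(focus[0] - xs, focus[1] - z_int) / C_PART  # interface -> focus
    return (t_water + t_part).min()

def delay_law(element_xs, focus, z_int):
    """Firing delays that make all wavefronts arrive at the focus together."""
    t = np.array([travel_time(x, focus, z_int) for x in element_xs])
    return t.max() - t

elements = np.linspace(-0.016, 0.016, 8)        # 8-element aperture at z = 0 (m)
delays = delay_law(elements, focus=(0.0, 0.03), z_int=0.02)
```

    The outermost elements have the longest paths and therefore fire first (zero delay). Production systems replace the brute-force search with fast ray tracing, which is the computational bottleneck the paper's 256-channel parallel architecture addresses.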

  14. An economic analysis of online streaming: How the music industry can generate revenues from cloud computing

    OpenAIRE

    Thomes, Tim Paul

    2011-01-01

    This paper investigates the upcoming business model of online streaming services, which allow music consumers either to subscribe to a service that provides free-of-charge access to streaming music and is funded by advertising, or to pay a monthly flat fee for ad-free access to the content of the service accompanied by additional benefits. Both businesses will be launched by a single provider of streaming music. By imposing a two-sided market model on the one hand combined wi...

  15. Real-Time Joint Streaming Data Processing from Social and Physical Sensors

    Science.gov (United States)

    Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.

    2014-12-01

    The results of the technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise or prompt, which can significantly delay the response to a felt or damaging earthquake. Social sensors, here represented as Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real-time based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. 
Results of this work suggest that the use of data from social media, in conjunction

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where an algorithm can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
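
    A classic algorithm from this setting is odd-even transposition sort on a linear processor array: n processors each hold one item and, for n rounds, alternately compare-exchange with their left or right neighbor. A sequential simulation of the parallel rounds:

```python
def odd_even_transposition_sort(items):
    """Simulate sorting on a linear processor array: n rounds, each a set of
    independent neighbor compare-exchanges (done in parallel on real hardware)."""
    a, n = list(items), len(items)
    for rnd in range(n):
        # even rounds pair (0,1), (2,3), ...; odd rounds pair (1,2), (3,4), ...
        for i in range(rnd % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```

    Each round takes constant time when the compare-exchanges run concurrently, so the array sorts in O(n) parallel steps with n processors, one of the lower-bound trade-offs the book analyzes.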

  17. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; Nikaido, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both the streaming potential and the accumulated charge of the outflowing water were measured simultaneously using a sandwich-type cell. The voltages generated in sections divided along the flow direction satisfied additivity. The sign of the streaming potential agreed with that of the streaming electrification. The relation between streaming potential and streaming electrification was explained from the viewpoint of the electrical double layer at the glass-water interface.
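
    The double-layer picture invoked here is commonly quantified by the Helmholtz-Smoluchowski relation, which ties the streaming potential to the applied pressure difference and predicts that its sign follows the zeta potential, consistent with the sign agreement reported above. A sketch with illustrative (assumed) property values, not data from this study:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def streaming_potential(dp, zeta, eps_r, viscosity, conductivity):
    """Helmholtz-Smoluchowski streaming potential (V) across a channel:
    dV = eps0 * eps_r * zeta * dP / (mu * sigma)."""
    return EPS0 * eps_r * zeta * dp / (viscosity * conductivity)

# Assumed values: water (eps_r ~ 78.5, mu ~ 1e-3 Pa.s), low-conductivity
# water (sigma ~ 1e-3 S/m), glass zeta potential ~ -50 mV, dP = 1 bar.
dv = streaming_potential(1e5, -0.05, 78.5, 1e-3, 1e-3)
```

    Note that the magnitude scales inversely with conductivity, which is why large streaming potentials are seen mainly in low-conductivity water.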

  18. An Association-Oriented Partitioning Approach for Streaming Graph Query

    Directory of Open Access Journals (Sweden)

    Yun Hao

    2017-01-01

    Full Text Available The volumes of real-world graphs like knowledge graphs are increasing rapidly, which makes streaming graph processing a hot research area. Processing graphs in a streaming setting poses significant challenges from different perspectives, among which the graph partitioning method plays a key role. For graph queries, a well-designed partitioning method is essential for achieving good performance. Existing offline graph partitioning methods often require full knowledge of the graph, which is not available during streaming graph processing. To handle this problem, we propose an association-oriented streaming graph partitioning method named Assc. This approach first computes the rank values of vertices with a hybrid approximate PageRank algorithm. After splitting these vertices with an adapted variant of the affinity propagation algorithm, the processing order of the vertices in the sliding window can be determined. Finally, the partition to which each vertex should be assigned is decided according to the vertex's level and its associations. We compare its performance with a set of streaming graph partitioning methods and with METIS, a widely adopted offline approach. The results show that our solution can partition graphs with hundreds of millions of vertices in a streaming setting on a large collection of graph datasets, and that our approach outperforms the other graph partitioning methods.
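
    Assc's full pipeline (approximate PageRank plus adapted affinity propagation) is beyond a short sketch, but the general shape of a streaming partitioner can be illustrated with the simpler, well-known linear deterministic greedy heuristic: place each newly seen vertex in the partition holding most of its already-placed neighbors, penalized by partition fullness. This is explicitly not the paper's method, and all names and parameters are illustrative:

```python
from collections import defaultdict

def greedy_stream_partition(edge_stream, k, capacity):
    """Streaming vertex partitioning (linear deterministic greedy, not Assc):
    score(p) = placed-neighbors-in-p * (1 - size_p / capacity), with ties
    broken toward the least-loaded partition."""
    part, sizes, adj = {}, [0] * k, defaultdict(set)
    for u, v in edge_stream:
        adj[u].add(v); adj[v].add(u)
        for w in (u, v):
            if w in part:
                continue
            def score(p):
                placed = sum(1 for nb in adj[w] if part.get(nb) == p)
                return (placed * (1.0 - sizes[p] / capacity), -sizes[p])
            best = max(range(k), key=score)
            part[w] = best
            sizes[best] += 1
    return part

# Two triangles streamed edge by edge end up in separate partitions.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
parts = greedy_stream_partition(edges, k=2, capacity=3)
```

    Like Assc, this makes an irrevocable placement decision per vertex using only the edges seen so far, which is the defining constraint of the streaming setting.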

  19. Stream on the Sky: Outsourcing Access Control Enforcement for Stream Data to the Cloud

    OpenAIRE

    Dinh, Tien Tuan Anh; Datta, Anwitaman

    2012-01-01

    There is an increasing trend for businesses to migrate their systems towards the cloud. Security concerns that arise when outsourcing data and computation to the cloud include data confidentiality and privacy. Given that a tremendous amount of data is generated every day from a plethora of devices equipped with sensing capabilities, we focus on the problem of access control over live streams of data based on triggers or sliding windows, which is a distinct and more challenging problem tha...

  20. Linear perturbation theory for tidal streams and the small-scale CDM power spectrum

    Science.gov (United States)

    Bovy, Jo; Erkal, Denis; Sanders, Jason L.

    2017-04-01

    Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^{6.5} and 10^9 M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations or to fsub(r < 20 kpc) …], demonstrating that dark matter is clumpy on the smallest scales relevant for galaxy formation.
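The paper's action-angle calculus cannot be reproduced from the abstract alone. As a generic sketch of the quoted "one-dimensional density power spectrum along the stream", the following naive periodogram (applied to a hypothetical density-contrast series) shows how a single large-scale mode appears as power at a low wavenumber:

```python
import cmath
import math

def power_spectrum(delta):
    """Naive DFT periodogram of a 1-D density-contrast series:
    P(k) = |sum_j delta_j * exp(-2*pi*i*j*k/N)|^2 / N, for k < N/2."""
    n = len(delta)
    return [abs(sum(delta[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for j in range(n))) ** 2 / n
            for k in range(n // 2)]

# Hypothetical density contrast dominated by one large-scale mode (k = 3).
n = 64
delta = [0.2 * math.cos(2 * math.pi * 3 * j / n) for j in range(n)]
spec = power_spectrum(delta)
```

In the paper's setting the analogous spectrum is computed along the stream angle, and its shape traces the subhalo mass function.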

  1. Galaxies with jet streams

    International Nuclear Information System (INIS)

    Breuer, R.

    1981-01-01

    Describes recent research work on supersonic gas flow. Notable examples have been observed in cosmic radio sources, where jet streams of galactic dimensions sometimes occur, apparently as the result of interaction between neighbouring galaxies. The current theory of jet behaviour has been convincingly demonstrated using computer simulation. The surprisingly long-term stability is related to the supersonic velocity, and is analogous to the way in which an Apollo spacecraft re-entering the atmosphere supersonically is protected by the gas from the burning shield. (G.F.F.)

  2. Bridging Scales: A Model-Based Assessment of the Technical Tidal-Stream Energy Resource off Massachusetts, USA

    Science.gov (United States)

    Cowles, G. W.; Hakim, A.; Churchill, J. H.

    2016-02-01

    Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges to the design process stem from the wide range of problem scales extending from device to array. In the present work we apply a multi-model approach to bridge the scales of interest and select optimal device geometries to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (~10 m), a barotropic setup of the unstructured grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP data as well as pressure data. At the device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites including comparison with theoretical approaches will be presented.
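The FVCOM/SUmb coupling is not shown in the abstract. The following toy sketch illustrates only the "query the surrogate model with extracted velocities" step, using the standard tidal-power relation P = 0.5·ρ·Cp·A·v³; the surrogate table, Cp values, and site velocities are invented for illustration:

```python
RHO = 1025.0  # seawater density, kg/m^3 (nominal)

def annual_energy(cp, area, velocities, hours=8760.0):
    """Mean power 0.5*rho*Cp*A*v^3 over the velocity samples, times one year,
    returned in joules."""
    mean_power = sum(0.5 * RHO * cp * area * v ** 3 for v in velocities) / len(velocities)
    return mean_power * hours * 3600.0

def best_geometry(surrogate, velocities):
    """Pick the device geometry whose surrogate (Cp, area) yields the most
    annual energy for the site's velocity samples."""
    return max(surrogate,
               key=lambda g: annual_energy(surrogate[g]["cp"],
                                           surrogate[g]["area"], velocities))

# Hypothetical surrogate table and site velocity samples (m/s).
surrogate = {
    "small_rotor": {"cp": 0.40, "area": 10.0},
    "large_rotor": {"cp": 0.35, "area": 20.0},
}
site_velocities = [0.8, 1.2, 1.9, 2.4, 1.5]
choice = best_geometry(surrogate, site_velocities)
```

The real surrogate maps full velocity profiles to optimized 2D turbine geometries; this lookup table stands in for that step.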

  3. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks. Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  4. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    Science.gov (United States)

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
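StreamMap's "super kernel density estimation" uses an adaptive kernel to resolve overlap. As a much simpler, fixed-bandwidth sketch of the underlying idea, aggregating streamed 2-D points into a density field, consider a plain Gaussian KDE evaluated at grid cells (all numbers hypothetical):

```python
import math

def kde_grid(points, grid, bandwidth):
    """Evaluate a fixed-bandwidth Gaussian kernel density estimate of 2-D
    points at each (gx, gy) grid cell."""
    norm = 1.0 / (2.0 * math.pi * bandwidth ** 2 * len(points))
    out = []
    for gx, gy in grid:
        s = sum(math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * bandwidth ** 2))
                for x, y in points)
        out.append(norm * s)
    return out

pts = [(0.0, 0.0), (0.1, -0.1), (-0.1, 0.1)]   # a tight cluster at the origin
grid = [(0.0, 0.0), (3.0, 3.0)]                 # inside vs. far from the cluster
dens = kde_grid(pts, grid, bandwidth=0.5)
```

StreamMap additionally morphs between successive density frames for smooth animation; that step is omitted here.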

  5. Analyzing indicators of stream health for Minnesota streams

    Science.gov (United States)

    Singh, U.; Kocian, M.; Wilson, B.; Bolton, A.; Nieber, J.; Vondracek, B.; Perry, J.; Magner, J.

    2005-01-01

    Recent research has emphasized the importance of using physical, chemical, and biological indicators of stream health for diagnosing impaired watersheds and their receiving water bodies. A multidisciplinary team at the University of Minnesota is carrying out research to develop a stream classification system for Total Maximum Daily Load (TMDL) assessment. Funding for this research is provided by the United States Environmental Protection Agency and the Minnesota Pollution Control Agency. One objective of the research study involves investigating the relationships between indicators of stream health and localized stream characteristics. Measured data from Minnesota streams collected by various government and non-government agencies and research institutions have been obtained for the research study. Innovative Geographic Information Systems tools developed by the Environmental Science Research Institute and the University of Texas are being utilized to combine and organize the data. Simple linear relationships between index of biological integrity (IBI) and channel slope, two-year stream flow, and drainage area are presented for the Redwood River and the Snake River Basins. Results suggest that more rigorous techniques are needed to successfully capture trends in IBI scores. Additional analyses will be done using multiple regression, principal component analysis, and clustering techniques. Uncovering key independent variables and understanding how they fit together to influence stream health are critical in the development of a stream classification for TMDL assessment.
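The "simple linear relationships" between IBI and basin characteristics mentioned above are ordinary least-squares fits. A minimal sketch with hypothetical (channel slope, IBI score) pairs, constructed to lie exactly on IBI = 40 + 2000·slope so the recovered coefficients are known:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical data: four stations' channel slopes and IBI scores.
slopes = [0.001, 0.002, 0.004, 0.008]
ibi = [40 + 2000 * s for s in slopes]
a, b = linear_fit(slopes, ibi)
```

The study's point is precisely that such single-predictor fits are too weak, motivating multiple regression, PCA, and clustering.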

  6. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang

    2016-01-01

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  7. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
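A simplified sketch of the quoted idea of "interpolation on a kernel model, which is incrementally updated upon the arrival of new samples": keep kernel sums at fixed 1-D resampling points, add each arriving sample's contribution, and answer queries by linear interpolation. This is a fixed-grid toy, not KDE-Track's adaptive estimator or its bandwidth selection:

```python
import math

class GridKDE:
    """Toy incremental KDE: Gaussian kernel sums maintained at fixed 1-D
    resampling points; density queries use linear interpolation."""
    def __init__(self, lo, hi, m, bandwidth):
        self.xs = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
        self.sums = [0.0] * m
        self.n = 0
        self.h = bandwidth

    def update(self, sample):
        """O(m) incremental update for one arriving stream sample."""
        self.n += 1
        for i, x in enumerate(self.xs):
            u = (x - sample) / self.h
            self.sums[i] += math.exp(-0.5 * u * u) / (self.h * math.sqrt(2 * math.pi))

    def density(self, q):
        """Linear interpolation between the two bracketing grid points."""
        step = self.xs[1] - self.xs[0]
        i = min(int((q - self.xs[0]) / step), len(self.xs) - 2)
        t = (q - self.xs[i]) / step
        return ((1 - t) * self.sums[i] + t * self.sums[i + 1]) / self.n

kde = GridKDE(lo=-5.0, hi=5.0, m=51, bandwidth=0.5)
for s in [0.0, 0.2, -0.1, 0.1, -0.2]:    # stream of samples clustered near zero
    kde.update(s)
```

Query cost is O(1) per point, which is what makes the interpolation scheme attractive for streams.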

  8. Modeling Evaluation of Tidal Stream Energy and the Impacts of Energy Extraction on Hydrodynamics in the Taiwan Strait

    Directory of Open Access Journals (Sweden)

    Ming-Hsi Hsu

    2013-04-01

    Full Text Available Tidal stream speeds in straits are accelerated because of geographic and bathymetric features. For instance, narrow channels and shallows can cause high tidal stream energy. In this study, water level and tidal current were simulated using a three-dimensional semi-implicit Eulerian-Lagrangian finite-element model to investigate the complex tidal characteristics in the Taiwan Strait and to determine potential locations for harnessing tidal stream energy. The model was driven by nine tidal components (M2, S2, N2, K2, K1, O1, P1, Q1, and M4) at open boundaries. The modeling results were validated with the measured data, including water level and tidal current. Through the model simulations, we found that the highest tidal currents occurred at the Penghu Channel in the Taiwan Strait. The Penghu Channel is an appropriate location for the deployment of a tidal turbine array because of its deep and flat bathymetry. The impacts of energy extraction on hydrodynamics were assessed by considering the momentum sink approach. The simulated results indicate that only minimal impacts would occur on water level and tidal current in the Taiwan Strait if a turbine array (55 turbines) was installed in the Penghu Channel.

  9. Optimisation of the conditions for stripping voltammetric analysis at liquid-liquid interfaces supported at micropore arrays: a computational simulation.

    Science.gov (United States)

    Strutwolf, Jörg; Arrigan, Damien W M

    2010-10-01

    Micropore membranes have been used to form arrays of microinterfaces between immiscible electrolyte solutions (µITIES) as a basis for the sensing of non-redox-active ions. Implementation of stripping voltammetry as a sensing method at these arrays of µITIES was applied recently to detect drugs and biomolecules at low concentrations. The present study uses computational simulation to investigate the optimum conditions for stripping voltammetric sensing at the µITIES array. In this scenario, the diffusion of ions in both the aqueous and the organic phases contributes to the sensing response. The influence of the preconcentration time, the micropore aspect ratio, the location of the microinterface within the pore, the ratio of the diffusion coefficients of the analyte ion in the organic and aqueous phases, and the pore wall angle were investigated. The simulations reveal that the accessibility of the microinterfaces during the preconcentration period should not be hampered by a recessed interface and that diffusional transport in the phase where the analyte ions are preconcentrated should be minimized. This will ensure that the ions are accumulated within the micropores close to the interface and thus be readily available for back transfer during the stripping process. On the basis of the results, an optimal combination of the examined parameters is proposed, which together improve the stripping voltammetric signal and provide an improvement in the detection limit.

  10. Development and applications of a computer-aided phased array assembly for ultrasonic testing

    International Nuclear Information System (INIS)

    Schenk, G.; Montag, H.J.; Wuestenberg, H.; Erhard, A.

    1985-01-01

    The use of modern electronic equipment for programmable signal delay increasingly allows transit-time controlled phased arrays to be applied in non-destructive ultrasonic materials testing. A phased-array assembly is described that permits fast variation of the incident angle of the acoustic wave and of the sonic beam focus, together with numerical evaluation of measured data. Phased arrays can be optimized by adding programmable electronic equipment so that the quality of conventional designs can be achieved. Applications of the new technical improvement are explained with reference to stress corrosion cracking, turbine testing, and echo tomography of welded joints. (orig./HP) [de
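The transit-time control described above rests on the standard delay law for steering a linear array: element n fires with delay tau_n = n·p·sin(theta)/c, where p is the element pitch and c the wave speed. A minimal sketch, with element count, pitch, and wave speed chosen arbitrarily (not the described instrument's actual parameters):

```python
import math

def steering_delays(n_elements, pitch, angle_deg, c):
    """Per-element firing delays (seconds) that tilt a linear array's beam
    by angle_deg: tau_n = n * pitch * sin(theta) / c, shifted so the
    earliest-firing element has zero delay."""
    theta = math.radians(angle_deg)
    raw = [i * pitch * math.sin(theta) / c for i in range(n_elements)]
    t0 = min(raw)
    return [t - t0 for t in raw]

# Example: 16 elements at 0.6 mm pitch, steering a 45-degree shear wave
# in steel (c ~ 3230 m/s).
delays = steering_delays(16, 0.6e-3, 45.0, 3230.0)
```

Focusing adds a quadratic term to the same delay profile; only pure steering is shown here.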

  11. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    Science.gov (United States)

    2018-01-01

    Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 10^4 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364

  12. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    Directory of Open Access Journals (Sweden)

    Kensuke Sekihara

    2018-01-01

    Full Text Available Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 10^4 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis.

  13. Supporting seamless mobility for P2P live streaming.

    Science.gov (United States)

    Kim, Eunsam; Kim, Sangjin; Lee, Choonhwa

    2014-01-01

    With the advent of various mobile devices with powerful networking and computing capabilities, the users' demand to enjoy live video streaming services such as IPTV with mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by the handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Simulation experiments show that a P2P live streaming system with the proposed handover scheme improves playback continuity significantly compared to one without it.

  14. Supporting Seamless Mobility for P2P Live Streaming

    Directory of Open Access Journals (Sweden)

    Eunsam Kim

    2014-01-01

    Full Text Available With the advent of various mobile devices with powerful networking and computing capabilities, the users' demand to enjoy live video streaming services such as IPTV with mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by the handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Simulation experiments show that a P2P live streaming system with the proposed handover scheme improves playback continuity significantly compared to one without it.

  15. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequentially, fully in parallel, or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
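The space problem described here, where full materialization creates huge temporaries, can be illustrated independently of NESL: evaluating a bulk operation chunk-by-chunk keeps only one chunk live at a time instead of a full intermediate array. A minimal sketch (chunk size arbitrary, not the thesis's actual runtime):

```python
def chunked_dot(a, b, chunk=1024):
    """Sum of a[i]*b[i] evaluated chunk-by-chunk: peak extra memory is one
    chunk of products, instead of a fully materialized temporary a*b."""
    total = 0.0
    for start in range(0, len(a), chunk):
        block = [x * y for x, y in zip(a[start:start + chunk],
                                       b[start:start + chunk])]
        total += sum(block)   # block is released before the next chunk
    return total

a = [1.0] * 10_000
b = [2.0] * 10_000
result = chunked_dot(a, b, chunk=512)
```

The streaming extensions in the thesis do this scheduling automatically for whole data-parallel programs rather than for one hand-written reduction.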

  16. Capacitive micromachined ultrasonic transducer arrays as tunable acoustic metamaterials

    Energy Technology Data Exchange (ETDEWEB)

    Lani, Shane W., E-mail: shane.w.lani@gmail.com, E-mail: karim.sabra@me.gatech.edu, E-mail: levent.degertekin@me.gatech.edu; Sabra, Karim G. [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 801Ferst Drive, Georgia 30332-0405 (United States); Wasequr Rashid, M.; Hasler, Jennifer [School of Electrical and Computer Engineering, Georgia Institute of Technology, Van Leer Electrical Engineering Building, 777 Atlantic Drive NW, Atlanta, Georgia 30332-0250 (United States); Levent Degertekin, F. [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 801Ferst Drive, Georgia 30332-0405 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Van Leer Electrical Engineering Building, 777 Atlantic Drive NW, Atlanta, Georgia 30332-0250 (United States)

    2014-02-03

    Capacitive Micromachined Ultrasonic Transducers (CMUTs) operating in immersion support dispersive evanescent waves due to the subwavelength periodic structure of electrostatically actuated membranes in the array. Evanescent wave characteristics also depend on the membrane resonance, which is modified by the externally applied bias voltage, offering a mechanism to tune the CMUT array as an acoustic metamaterial. The dispersion and tunability characteristics are examined using a computationally efficient, mutual radiation impedance based approach to model a finite-size array and realistic parameters of variation. The simulations are verified, and tunability is demonstrated by experiments on a linear CMUT array operating in the 2-12 MHz range.

  17. A computational study for investigating acoustic streaming and tissue heating during high intensity focused ultrasound through blood vessel with an obstacle

    Science.gov (United States)

    Parvin, Salma; Sultana, Aysha

    2017-06-01

    The influence of High Intensity Focused Ultrasound (HIFU) on an obstacle in a blood vessel is studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field around the obstacle in the blood vessel. The model construction is based on the linear Westervelt and conjugate heat transfer equations for the obstacle in the blood vessel. The system of equations is solved using the Finite Element Method (FEM). This three-dimensional numerical study shows that the rate of heat transfer increases near the obstacle, and that both convective cooling and acoustic streaming can considerably change the temperature field.

  18. Drainage basins, channels, and flow characteristics of selected streams in central Pennsylvania

    Science.gov (United States)

    Brush, Lucien M.

    1961-01-01

    The hydraulic, basin, and geologic characteristics of 16 selected streams in central Pennsylvania were measured for the purpose of studying the relations among these general characteristics and their process of development. The basic parameters which were measured include bankfull width and depth, channel slope, bed material size and shape, length of stream from drainage divide, and size of drainage area. The kinds of bedrock over which the streams flow were noted. In these streams the bankfull channel is filled by flows approximating the 2.3-year flood. By measuring the breadth and mean depth of the channel, it was possible to compute the bankfull mean velocity for each of the 119 sampling stations. These data were then used to compute the downstream changes in hydraulic geometry of the streams studied. This method has been called an indirect computation of the hydraulic geometry. The results obtained by the indirect method are similar to those of the direct method of other workers. The basins were studied by examining the relations of drainage area, discharge, and length of stream from drainage divide. For the streams investigated, excellent correlations were found to exist between drainage area and the 2.3-year flood, as well as between length of stream from the basin divide and drainage area. From these correlations it is possible to predict the discharge for the 2.3-year flood at any arbitrary point along the length of the stream. The long, intermediate, and short axes of pebbles sampled from the bed of the stream were recorded to study both size and sphericity changes along individual streams and among the streams studied. No systematic downstream changes in sphericity were found. Particle size changes are erratic and show no consistent relation to channel slope. Particle size decreases downstream in many streams but remains constant or increases in others. Addition of material by tributaries is one factor affecting particle size and another is the parent
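The "indirect computation" of the hydraulic geometry described above starts from continuity: bankfull mean velocity is v = Q/(w·d), with Q approximated by the 2.3-year flood that fills the bankfull channel. A minimal sketch with hypothetical station values:

```python
def bankfull_velocity(discharge, width, mean_depth):
    """Continuity: mean velocity v = Q / (w * d), with Q in m^3/s and
    width/depth in meters, giving v in m/s."""
    return discharge / (width * mean_depth)

# Hypothetical station: 2.3-year flood of 12 m^3/s in a channel
# 8 m wide and 0.75 m deep at bankfull stage.
v = bankfull_velocity(12.0, 8.0, 0.75)
```

Repeating this at each of the 119 stations yields the downstream trend of velocity with discharge, which is the indirect hydraulic geometry.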

  19. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  20. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    Science.gov (United States)

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonic-induced cavitation in liquid foods. However, most new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models that help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W0/V ≥ 25 kW m−3. This model successfully describes the hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
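The closure stated above, equating the jet's hydrodynamic momentum rate (ρ·A·v²) with the emitted acoustic momentum rate, fixes the inlet velocity once the acoustic momentum rate is known; for a plane-wave source that rate is P/c. A simplified plane-wave estimate, not the paper's full CFD setup; the horn power and tip size below are invented:

```python
import math

def jet_inlet_velocity(acoustic_power, tip_area, rho=1000.0, c=1500.0):
    """Equate jet momentum rate rho*A*v^2 with the plane-wave acoustic
    momentum rate P/c and solve for the inlet velocity v (m/s)."""
    return math.sqrt(acoustic_power / (c * rho * tip_area))

# Hypothetical example: a 50 W horn with a 13 mm diameter tip in water.
tip_area = math.pi * (13e-3 / 2) ** 2
v_in = jet_inlet_velocity(50.0, tip_area)
```

This inlet velocity is what the CFD model would impose as the boundary condition at the horn tip.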

  1. StreamStats, version 4

    Science.gov (United States)

    Ries, Kernell G.; Newson, Jeremy K.; Smith, Martyn J.; Guthrie, John D.; Steeves, Peter A.; Haluska, Tana L.; Kolb, Katharine R.; Thompson, Ryan F.; Santoro, Richard D.; Vraga, Hans W.

    2017-10-30

    StreamStats version 4, available at https://streamstats.usgs.gov, is a map-based web application that provides an assortment of analytical tools that are useful for water-resources planning, management, and engineering purposes. Developed by the U.S. Geological Survey (USGS), the primary purpose of StreamStats is to provide estimates of streamflow statistics for user-selected ungaged sites on streams and for USGS streamgages, which are locations where streamflow data are collected. Streamflow statistics, such as the 1-percent flood, the mean flow, and the 7-day 10-year low flow, are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. For example, estimates of the 1-percent flood (which is exceeded, on average, once in 100 years and has a 1-percent chance of exceedance in any year) are used to create flood-plain maps that form the basis for setting insurance rates and land-use zoning. This and other streamflow statistics are also used for dam, bridge, and culvert design; water-supply planning and management; permitting of water withdrawals and wastewater and industrial discharges; hydropower facility design and regulation; and setting of minimum allowed streamflows to protect freshwater ecosystems. Streamflow statistics can be computed from available data at USGS streamgages depending on the type of data collected at the stations. Most often, however, streamflow statistics are needed at ungaged sites, where no streamflow data are available to determine the statistics.
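The parenthetical definition of the 1-percent flood can be made concrete: if the annual exceedance probability is p, the chance of at least one exceedance in n independent years is 1 − (1 − p)^n:

```python
def exceedance_risk(annual_p, years):
    """Probability of at least one exceedance in `years` independent years,
    given an annual exceedance probability `annual_p`."""
    return 1.0 - (1.0 - annual_p) ** years

# The "1-percent flood" over a 30-year mortgage and over a century.
risk_30 = exceedance_risk(0.01, 30)    # about 26%
risk_100 = exceedance_risk(0.01, 100)  # about 63%, not a certainty
```

This is why "100-year flood" is a misleading name: the event is more likely than not to occur within 100 years, but far from guaranteed.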

  2. SELECTING SAGITTARIUS: IDENTIFICATION AND CHEMICAL CHARACTERIZATION OF THE SAGITTARIUS STREAM

    Energy Technology Data Exchange (ETDEWEB)

    Hyde, E. A. [University of Western Sydney, Locked Bag 1797, Penrith South DC, NSW 1797 (Australia); Keller, S. [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2601 (Australia); Zucker, D. B. [Macquarie University, Physics and Astronomy, NSW 2109 (Australia); Ibata, R.; Siebert, A. [Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l’Université, F-67000 Strasbourg (France); Lewis, G. F.; Conn, A. R. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, NSW 2006 (Australia); Penarrubia, J. [ROE, The University of Edinburgh, Institute for Astronomy, Edinburgh EH9 3HJ (United Kingdom); Irwin, M.; Gilmore, G. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Lane, R. R. [Departamento de Astronomía Universidad de Concepción, Casilla 160 C, Concepción (Chile); Koch, A. [Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg (Germany); Diakogiannis, F. I. [International Center for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia); Martell, S., E-mail: E.Hyde@uws.edu.au [Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052 (Australia)

    2015-06-01

    Wrapping around the Milky Way, the Sagittarius stream is the dominant substructure in the halo. Our statistical selection method has allowed us to identify 106 highly likely members of the Sagittarius stream. Spectroscopic analysis of the metallicity and kinematics of all members provides us with a new mapping of the Sagittarius stream. We find correspondence between the velocity distribution of stream stars and that computed for a triaxial model of the Milky Way dark matter halo. The Sagittarius trailing arm exhibits a metallicity gradient, ranging from −0.59 to −0.97 dex over 142°. This is consistent with the scenario of tidal disruption from a progenitor dwarf galaxy that possessed an internal metallicity gradient. We note high metallicity dispersion in the leading arm, causing a lack of detectable gradient and possibly indicating orbital phase mixing. We additionally report on a potential detection of the Sextans dwarf spheroidal in our data.

  3. Shifting stream planform state decreases stream productivity yet increases riparian animal production

    Science.gov (United States)

    Venarsky, Michael P.; Walters, David M.; Hall, Robert O.; Livers, Bridget; Wohl, Ellen

    2018-01-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat, whereas disturbed streams (wildfires and logging) have single channels. We tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m−2), but values were 2×–21× higher in undisturbed reaches per unit of stream valley (m−1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams both per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3× more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects only weakly influence habitat-specific function and instead primarily influence stream–riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs. riffles).

  4. Quantifying Forested Riparian Buffer Ability to Ameliorate Stream Temperature in a Missouri Ozark Border Stream of the Central U.S.

    Science.gov (United States)

    Bulliner, E. A.; Hubbart, J. A.

    2009-12-01

    Riparian buffers play an important role in modulating stream water quality, including temperature. There is a need to better understand riparian form and function to validate and improve contemporary management practices. Further studies are warranted to characterize energy attenuation by forested riparian canopy layers that normally buffer stream temperature, particularly in the central hardwood forest regions of the United States, where relationships between canopy density and stream temperature are unknown. To quantify these complex processes, two intensively instrumented hydroclimate stations were installed along two reaches of a stream in central Missouri, USA, in the winter of 2008. The hydroclimate stations are located along stream reaches oriented in both cardinal directions, which will allow interpolation of results to other orientations. Each station consists of an array of instrumentation that senses the flux of water and energy into and out of the riparian zone. Reference data are supplied from a nearby flux tower (US DOE) located on top of a forested ridge. The study sites are located within a University of Missouri preserved wildland area on the border of southern Missouri's Ozark region, an ecologically distinct region in the central United States. Limestone underlies the study area, resulting in a distinct semi-karst hydrologic system. Vegetation forms a complex, multi-layered canopy extending from the stream edge through the riparian zone and into the surrounding hills. The climate is classified as humid continental, with approximate average annual temperature and precipitation of 13.2°C and 970 mm, respectively. Preliminary results (summer 2009 data) indicate incoming short-wave radiation is 24.9% higher at the N-S oriented stream reach relative to the E-W oriented reach. Maximum incoming short-wave radiation during the period was 64.5% lower at the N-S reach relative to the E-W reach. Average air temperature for the E-W reach was 0.3°C lower

  5. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    Science.gov (United States)

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    Push-based database management systems (DBMSs) are a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a prerequisite in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA streams. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
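
The occupancy-based reasoning described above can be sketched with a toy resident-blocks calculation. This is not the paper's model; it is a minimal illustration, loosely following CUDA's occupancy rules, with resource limits (register file, shared memory, thread and block caps per SM) chosen to resemble a generic GPU:

```python
def blocks_per_sm(threads_per_block, regs_per_thread, smem_per_block,
                  max_threads=2048, regs_per_sm=65536,
                  smem_per_sm=98304, max_blocks=32):
    """Resident blocks per SM: the tightest of the thread, register,
    shared-memory, and block-count limits (limits are illustrative)."""
    by_threads = max_threads // threads_per_block
    by_regs = regs_per_sm // (regs_per_thread * threads_per_block)
    by_smem = smem_per_sm // smem_per_block if smem_per_block else max_blocks
    return min(max_blocks, by_threads, by_regs, by_smem)

def occupancy(threads_per_block, regs_per_thread, smem_per_block):
    """Fraction of the SM's 2048 thread slots kept occupied."""
    resident = blocks_per_sm(threads_per_block, regs_per_thread, smem_per_block)
    return resident * threads_per_block / 2048

# A 256-thread kernel using 64 registers per thread is register-bound:
print(occupancy(256, 64, 0))  # 0.5
```

Two kernels whose combined resource demands fit under these caps can reside on an SM at once, which is what concurrent execution in separate CUDA streams exploits.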

  7. LHCb: The LHCb Turbo stream

    CERN Multimedia

    Puig Navarro, Albert

    2015-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the "turbo stream" the trigger will write out a compact summary of "physics" objects containing all information necessary for analyses, and this will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during...

  8. ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    This paper analyzes an Internet-based streaming video service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), which adapts delivery over the Internet to the Hypertext Transfer Protocol (HTTP). DASH allows a video to be segmented into several packages that will be streamed. The first stage of DASH is to compress the source video to lower bit rates using the H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming manifest, the Media Presentation Description (MPD), in the format known as MPEG-DASH. The MPEG-DASH video streams run on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, giving rise to the concept of scalable streaming video services on the client side. The main target of the mechanism is smooth display of the MPEG-DASH streaming video on the client. The simulation results show that the scalable video streaming scheme based on MPEG-DASH improves the quality of the displayed image on the client side, where video buffering can be made constant and smooth for the duration of video playback
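
The client-side adaptation the abstract describes, choosing among several bit-rate variants of the same video, can be sketched as a throughput-driven ladder selection. The ladder values, safety margin, and function name below are hypothetical:

```python
# Hypothetical bitrate ladder (kbit/s) for one video, highest first.
LADDER = [4500, 2500, 1200, 600, 300]

def pick_variant(throughput_kbps, safety=0.8):
    """Pick the highest bitrate that fits within a safety margin of
    the measured throughput; fall back to the lowest variant."""
    budget = throughput_kbps * safety
    for rate in LADDER:
        if rate <= budget:
            return rate
    return LADDER[-1]

# Throughput estimates between segment downloads drive the adaptation:
for bw in (5000, 2000, 900):
    print(pick_variant(bw))  # 2500, then 1200, then 600
```

Because each segment is an independent HTTP download, the client can switch variants at every segment boundary, which is what keeps the playback buffer steady as bandwidth varies.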

  9. Multiwall carbon nanotube microcavity arrays

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Rajib; Butt, Haider, E-mail: h.butt@bham.ac.uk [Nanotechnology Laboratory, School of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT (United Kingdom); Rifat, Ahmmed A. [Integrated Lightwave Research Group, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603 (Malaysia); Yetisen, Ali K.; Yun, Seok Hyun [Harvard Medical School and Wellman Center for Photomedicine, Massachusetts General Hospital, 65 Landsdowne Street, Cambridge, Massachusetts 02139 (United States); Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Dai, Qing [National Center for Nanoscience and Technology, Beijing 100190 (China)

    2016-03-21

    Periodic highly dense multi-wall carbon nanotube (MWCNT) arrays can act as photonic materials exhibiting band gaps in the visible regime and beyond the terahertz range. MWCNT arrays in a square arrangement with nanoscale lattice constants can be configured as a microcavity with predictable resonance frequencies. Here, computational analyses of compact square microcavities (≈0.8 × 0.8 μm²) in MWCNT arrays were demonstrated to obtain enhanced quality factors (≈170–180) and narrow-band resonance peaks. Cavity resonances were rationally designed and optimized (nanotube geometry and cavity size) with the finite element method. Series (1 × 2 and 1 × 3) and parallel (2 × 1 and 3 × 1) combinations of microcavities were modeled and their resonance modes analyzed. Higher-order MWCNT microcavities showed enhanced resonance modes, which were red-shifted with increasing Q-factors. Parallel microcavity geometries were also optimized to obtain narrow-band tunable filtering in low-loss communication windows (810, 1336, and 1558 nm). Compact series and parallel MWCNT microcavity arrays may have applications in optical filters and miniaturized optical communication devices.

  10. Improved streaming analysis technique: spherical harmonics expansion of albedo data

    International Nuclear Information System (INIS)

    Albert, T.E.; Simmons, G.L.

    1979-01-01

    An improved albedo scattering technique was implemented with a three-dimensional Monte Carlo transport code for use in analyzing radiation streaming problems. The improvement was based on a shifted spherical harmonics expansion of the doubly differential albedo database. The result of the improvement was a factor of 3 to 10 reduction in data storage requirements and approximately a factor of 3 to 6 increase in computational speed. Comparisons of results obtained using the technique with measurements are shown for neutron streaming in one- and two-legged square concrete ducts
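
The storage saving from such an expansion can be illustrated: a smooth angular albedo table is replaced by a handful of Legendre coefficients (spherical harmonics reduce to Legendre polynomials in a single angular variable). The tabulated profile below is a toy stand-in, not actual albedo data:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy angular "albedo" tabulated on 64 direction cosines (not real data).
mu = np.linspace(-1.0, 1.0, 64)
albedo = 0.3 + 0.2 * mu + 0.05 * mu**2

# Replace the 64-entry table by a degree-4 Legendre expansion:
coeffs = legendre.legfit(mu, albedo, deg=4)   # 5 numbers instead of 64
reconstructed = legendre.legval(mu, coeffs)

max_err = float(np.max(np.abs(reconstructed - albedo)))
print(len(coeffs), max_err < 1e-10)  # 5 True
```

For smooth angular distributions a low-order expansion reproduces the table essentially exactly, which is why the paper sees an order-of-magnitude storage reduction; for doubly differential data the same idea is applied in two angular variables.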

  11. Many-body simulations using an array processor

    International Nuclear Information System (INIS)

    Rapaport, D.C.

    1985-01-01

    Simulations of microscopic models of water and polypeptides using molecular dynamics and Monte Carlo techniques have been carried out with the aid of an FPS array processor. The computational techniques are discussed, with emphasis on the development and optimization of the software to take account of the special features of the processor. The computing requirements of these simulations exceed what could be reasonably carried out on a normal 'scientific' computer. While the FPS processor is highly suited to the kinds of models described, several other computationally intensive problems in statistical mechanics are outlined for which alternative processor architectures are more appropriate

  12. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have some intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction compared. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored

  13. Piezo-Phototronic Enhanced UV Sensing Based on a Nanowire Photodetector Array.

    Science.gov (United States)

    Han, Xun; Du, Weiming; Yu, Ruomeng; Pan, Caofeng; Wang, Zhong Lin

    2015-12-22

    A large array of Schottky UV photodetectors (PDs) based on vertically aligned ZnO nanowires is achieved. By introducing the piezo-phototronic effect, the performance of the PD array is enhanced up to seven times in photoresponsivity, six times in sensitivity, and 2.8 times in detection limit. The UV PD array may have applications in optoelectronic systems, adaptive optical computing, and communication. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is changing to be data intensive. Supercomputers must be balanced systems: not just CPU farms, but also petascale I/O and networking arrays. Anyone building cyberinfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  15. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
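
The channel-reduction idea, combining many physical coils into fewer "eigenchannels" that retain nearly all of the signal, can be sketched in software with an eigen-decomposition of the channel covariance. The data here are synthetic and the 8-to-4 reduction is illustrative; the paper performs the combination in RF hardware before the receivers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-channel data whose signal lives in 3 shared modes plus a
# little independent noise, mimicking the redundancy a combiner exploits.
modes = rng.standard_normal((8, 3))
samples = modes @ rng.standard_normal((3, 1000)) \
    + 0.05 * rng.standard_normal((8, 1000))

cov = np.cov(samples)
w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(w)[::-1]
w, v = w[order], v[:, order]

# Combine the 8 physical channels into the top-4 "eigenchannels":
compressed = v[:, :4].T @ samples    # shape (4, 1000)
retained = float(w[:4].sum() / w.sum())
print(compressed.shape, retained > 0.99)
```

When the channels are strongly correlated, the leading eigenchannels carry almost all of the variance, which is why a four-channel receiver can approach the SNR of the full eight-channel array.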

  16. Akamai Streaming

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    Akamai offers world-class streaming media services that enable Internet content providers and enterprises to succeed in today's Web-centric marketplace. They deliver live event Webcasts (complete with video production, encoding, and signal acquisition services), streaming media on demand, 24/7 Webcasts and a variety of streaming application services based upon their EdgeAdvantage.

  17. Analysis of the separation of protium from blanket tritium-product streams

    International Nuclear Information System (INIS)

    Misra, B.; Maroni, V.A.

    1981-07-01

    The case is considered in which the blanket product stream has been purified to the point where only protium, tritium, and a small quantity of deuterium remain. A cryogenic distillation cascade concept developed specifically to handle this enrichment problem is shown. The concept is based on a series of distillation columns and equilibrators capable of producing a protium-rich stream containing less than 1000 appm T and a tritium-rich stream containing less than 2000 appm H. It is envisioned that both of these streams could be blended with streams of comparable composition in the mainstream portion of the fuel cycle without further processing. The computational analysis of the cascade was based on a fixed arrangement of columns and equilibrators and a fixed number of theoretical plates per column, since these features are less easily varied in an actual system than reflux ratios and flow rates. In order to test the flexibility of this conceptual enrichment system to adjust to variations of the H/T ratio in the feed, H/T values of 0.333, 1.00, and 3.00 were investigated

  18. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Security of multimedia data, including image and video, is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests is performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm has the same security level as the A5/1 and W7 stream ciphers, is faster, and is suitable for real-time applications.
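
The structure of a stream cipher and one of the listed tests (information entropy) can be sketched as follows. The keystream generator here is a toy linear congruential generator, not A5/1, W7, or the paper's algorithm, and is not cryptographically secure:

```python
import math
from collections import Counter

def keystream(seed):
    """Toy LCG keystream -- NOT cryptographically secure; it only
    illustrates the XOR stream-cipher structure."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        yield (state >> 16) & 0xFF      # take a middle byte per step

def xor_cipher(data, seed):
    """Encryption and decryption are the same XOR operation."""
    return bytes(b ^ k for b, k in zip(data, keystream(seed)))

def entropy(data):
    """Shannon entropy in bits per byte (one of the paper's tests)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

plain = b"attack at dawn " * 100
cipher = xor_cipher(plain, seed=42)
assert xor_cipher(cipher, seed=42) == plain   # round trip recovers plaintext
print(entropy(cipher) > entropy(plain))       # ciphertext looks more random
```

The entropy test checks that ciphertext bytes are close to uniformly distributed (near 8 bits/byte), one of the statistical properties the paper verifies alongside histogram and correlation analyses.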

  19. Pascal-SC a computer language for scientific computation

    CERN Document Server

    Bohlender, Gerd; von Gudenberg, Jürgen Wolff; Rheinboldt, Werner; Siewiorek, Daniel

    1987-01-01

    Perspectives in Computing, Vol. 17: Pascal-SC: A Computer Language for Scientific Computation focuses on the application of Pascal-SC, a programming language developed as an extension of standard Pascal, in scientific computation. The publication first elaborates on the introduction to Pascal-SC, a review of standard Pascal, and real floating-point arithmetic. Discussions focus on optimal scalar product, standard functions, real expressions, program structure, simple extensions, real floating-point arithmetic, vector and matrix arithmetic, and dynamic arrays. The text then examines functions a

  20. Theoretical models of Kapton heating in solar array geometries

    Science.gov (United States)

    Morton, Thomas L.

    1992-01-01

    In an effort to understand pyrolysis of Kapton in solar arrays, a computational heat transfer program was developed. This model allows for the different materials and widely divergent length scales of the problem. The present status of the calculation indicates that thin copper traces surrounded by Kapton and carrying large currents can show large temperature increases, but the other configurations seen on solar arrays have adequate heat sinks to prevent substantial heating of the Kapton. Electron currents from the ambient plasma can also contribute to heating of thin traces. Since Kapton is stable at temperatures as high as 600 C, this indicates that it should be suitable for solar array applications. There are indications that the adhesive used in solar arrays may be a strong contributor to the pyrolysis problem seen in solar array vacuum chamber tests.

  1. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.

  2. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult
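
The per-decay-type summation that SFACTOR tabulates can be sketched in MIRD style: S is a unit-conversion constant times the sum of yield × energy × absorbed fraction (× quality factor) over emission types, divided by the target-organ mass. The emission data and organ mass below are hypothetical:

```python
def s_factor(emissions, m_target_g):
    """MIRD-style sketch: S (rem per uCi-day) for one source->target pair.

    emissions: list of (yield_per_decay, energy_MeV, absorbed_fraction, Q).
    K folds together 3.7e4 dis/(s*uCi), 86400 s/day, 1.602e-6 erg/MeV,
    and 100 erg/(g*rad): 3.7e4 * 86400 * 1.602e-6 / 100 ~= 51.2.
    """
    K = 51.2  # rem*g per (MeV * uCi-day), taking Q in rem/rad
    return K * sum(y * e * phi * q for y, e, phi, q in emissions) / m_target_g

# Hypothetical nuclide: one gamma at 0.364 MeV, yield 0.9 per decay,
# 1% absorbed in a 20 g target organ, quality factor 1:
print(round(s_factor([(0.9, 0.364, 0.01, 1.0)], 20.0), 5))  # 0.00839
```

A full code sums such terms over every emission type (alphas, electrons, gammas, and for spontaneous-fission nuclides the fragments and neutrons), each with its own absorbed fraction and quality factor.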

  3. The influence of waves on the tidal kinetic energy resource at a tidal stream energy site

    International Nuclear Information System (INIS)

    Guillou, Nicolas; Chapalain, Georges; Neill, Simon P.

    2016-01-01

    Highlights: • We model the influence of waves on tidal kinetic energy in the Fromveur Strait. • Numerical results are compared with field data of waves and currents. • The introduction of waves improves predictions of tidal stream power during storms. • Mean spring tidal stream potential is reduced by 12% during extreme wave conditions. • Potential is reduced by 7.8% with wave forces and 5.3% with enhanced friction. - Abstract: Successful deployment of tidal energy converters relies on access to accurate and high resolution numerical assessments of available tidal stream power. However, since suitable tidal stream sites are located in relatively shallow waters of the continental shelf where tidal currents are enhanced, tidal energy converters may experience effects of wind-generated surface-gravity waves. Waves may thus influence tidal currents, and associated kinetic energy, through two non-linear processes: the interaction of wave and current bottom boundary layers, and the generation of wave-induced currents. Here, we develop a three-dimensional tidal circulation model coupled with a phase-averaged wave model to quantify the impact of the waves on the tidal kinetic energy resource of the Fromveur Strait (western Brittany) - a region that has been identified with strong potential for tidal array development. Numerical results are compared with in situ observations of wave parameters (significant wave height, peak period and mean wave direction) and current amplitude and direction 10 m above the seabed (the assumed technology hub height for this region). The introduction of waves is found to improve predictions of tidal stream power at 10 m above the seabed at the measurement site in the Strait, reducing kinetic energy by up to 9% during storm conditions. Synoptic effects of wave radiation stresses and enhanced bottom friction are more specifically identified at the scale of the Strait. Waves contribute to a slight increase in the spatial gradient of

  4. The Midwest Stream Quality Assessment—Influences of human activities on streams

    Science.gov (United States)

    Van Metre, Peter C.; Mahler, Barbara J.; Carlisle, Daren M.; Coles, James F.

    2018-04-16

    Healthy streams and the fish and other organisms that live in them contribute to our quality of life. Extensive modification of the landscape in the Midwestern United States, however, has profoundly affected the condition of streams. Row crops and pavement have replaced grasslands and woodlands, streams have been straightened, and wetlands and fields have been drained. Runoff from agricultural and urban land brings sediment and chemicals to streams. What is the chemical, physical, and biological condition of Midwestern streams? Which physical and chemical stressors are adversely affecting biological communities, what are their origins, and how might we lessen or avoid their adverse effects?In 2013, the U.S. Geological Survey (USGS) conducted the Midwest Stream Quality Assessment to evaluate how human activities affect the biological condition of Midwestern streams. In collaboration with the U.S. Environmental Protection Agency National Rivers and Streams Assessment, the USGS sampled 100 streams, chosen to be representative of the different types of watersheds in the region. Biological condition was evaluated based on the number and diversity of fish, algae, and invertebrates in the streams. Changes to the physical habitat and chemical characteristics of the streams—“stressors”—were assessed, and their relation to landscape factors and biological condition was explored by using mathematical models. The data and models help us to better understand how the human activities on the landscape are affecting streams in the region.

  5. Innovation in radioactive wastewater-stream management

    International Nuclear Information System (INIS)

    Shaaban, D.A.E.F.

    2010-01-01

    Treatment of radioactive waste streams is receiving considerable attention in most countries. The present work addresses radioactive wastewater stream management through volume reduction by mutual heating and humidification of compressed dry air introduced through the wastewater. A mathematical model describing the volume reduction at the optimum operating condition is determined. A set of coupled first-order differential equations, obtained through the mass and energy conservation laws, is used to obtain the distributions of humidity ratio, water diffused to the air stream, water temperature, and humid air stream temperature through the bubbling column. These coupled differential equations are solved simultaneously by a computer program developed using the fourth-order Runge-Kutta method. The results obtained with the present mathematical model reveal that the air bubble state variables, such as the mass transfer coefficient (K_G) and interfacial area (a), have a strong effect on the process. Therefore, the behavior of the air bubble state variables with column height can be predicted and optimized. Moreover, design curves for the volumetric reduction of the wastewater streams are obtained and assessed at different operating conditions. An experimental setup was constructed to verify the suggested model. A comprehensive comparison between the suggested model results, recent experimental measurements, and the results of previous work was carried out
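
The solution scheme described above, coupled first-order ODEs marched up the column with fourth-order Runge-Kutta, can be sketched generically. The two-variable exchange system below is an illustrative stand-in for the paper's humidity and temperature equations, with made-up coefficients:

```python
def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Toy stand-in for the column equations: heat exchanged between water
# temperature Tw and air-stream temperature Ta along column height z.
def exchange(z, y):
    tw, ta = y
    return [-0.8 * (tw - ta), 1.2 * (tw - ta)]

y = [60.0, 20.0]            # inlet water 60 C, inlet air 20 C
h = 0.01
for step in range(200):     # march from z = 0 to z = 2
    y = rk4_step(exchange, step * h, y, h)
print([round(v, 2) for v in y])  # ≈ [44.29, 43.56]
```

The real model carries four coupled variables (humidity ratio, diffused water, water temperature, air temperature) instead of two, but the marching structure is the same.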

  6. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang; Wang, Wei

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting the changes in massive streaming data with a non-stationary distribution

  7. Networked Rectenna Array for Smart Material Actuators

    Science.gov (United States)

    Choi, Sang H.; Golembiewski, Walter T.; Song, Kyo D.

    2000-01-01

    The concept of microwave-driven smart material actuators is envisioned as the best option to alleviate the complexity associated with hard-wired control circuitry. Networked rectenna patch array receives and converts microwave power into a DC power for an array of smart actuators. To use microwave power effectively, the concept of a power allocation and distribution (PAD) circuit is adopted for networking a rectenna/actuator patch array. The PAD circuit is imbedded into a single embodiment of rectenna and actuator array. The thin-film microcircuit embodiment of PAD circuit adds insignificant amount of rigidity to membrane flexibility. Preliminary design and fabrication of PAD circuitry that consists of a few nodal elements were made for laboratory testing. The networked actuators were tested to correlate the network coupling effect, power allocation and distribution, and response time. The features of preliminary design are 16-channel computer control of actuators by a PCI board and the compensator for a power failure or leakage of one or more rectennas.

  8. The metaphors we stream by: Making sense of music streaming

    OpenAIRE

    Hagen, Anja Nylund

    2016-01-01

    In Norway music-streaming services have become mainstream in everyday music listening. This paper examines how 12 heavy streaming users make sense of their experiences with Spotify and WiMP Music (now Tidal). The analysis relies on a mixed-method qualitative study, combining music-diary self-reports, online observation of streaming accounts, Facebook and last.fm scrobble-logs, and in-depth interviews. By drawing on existing metaphors of Internet experiences we demonstrate that music-streaming...

  9. Correlates of elemental-isotopic composition of stream fishes: the importance of land-use, species identity and body size.

    Science.gov (United States)

    Montaña, C G; Schalk, C M

    2018-04-01

The isotopic (δ13C and δ15N) and stoichiometric (C:N:P) compositions of four fish species (Family Centrarchidae: Lepomis auritus, Lepomis cyanellus; Family Cyprinidae: Nocomis leptocephalus, Semotilus atromaculatus) were examined across four North Carolina Piedmont streams arrayed along an urbanization gradient. Both isotopic and stoichiometric composition of fishes appeared to track changes occurring in basal resource availability. δ13C values of basal resources and consumers were more enriched at the most urbanized streams. Similarly, basal resources and consumers were δ15N-enriched at more urbanized streams. Basal resource stoichiometry varied across streams, with periphyton being the most variable. Primary consumer stoichiometry also differed across streams. Intraspecific variation in fish stoichiometry correlated with the degree of urbanization, as the two cyprinids had higher N content and L. cyanellus had higher P content in more urbanized streams, probably due to enrichment of basal resources. Intrinsic factors, specifically species identity and body size, also affected stoichiometric variation. Phosphorus (P) content increased significantly with body size in centrarchids, but not in cyprinids. These results suggest that although species identity and body size are important predictors of elemental stoichiometry, the complex nature of altered urban streams may yield imbalances in the elemental composition of consumers via their food resources. © 2018 The Fisheries Society of the British Isles.

  10. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to wide application of phased array systems. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  11. Phased Array Radar Network Experiment for Severe Weather

    Science.gov (United States)

    Ushio, T.; Kikuchi, H.; Mega, T.; Yoshikawa, E.; Mizutani, F.; Takahashi, N.

    2017-12-01

Phased Array Weather Radar (PAWR) was first developed in 2012 by Osaka University and Toshiba under a grant from NICT using the digital beamforming technique, and captured impressive thunderstorm behavior with 30-second resolution. After that development, a second PAWR was installed in Kobe city, about 60 km away from the first PAWR site, and Tokyo Metropolitan University, Osaka University, Toshiba and the Osaka Local Government started a new project to develop the Osaka Urban Demonstration Network. The main sensor of the Osaka Network is a 2-node Phased Array Radar Network and a lightning location system. Data products, created both on a local high-performance computer and in the Toshiba Computer Cloud, include single- and multi-radar data, vector wind, quantitative precipitation estimation, VIL, nowcasting, lightning location and analysis. Each radar node is calibrated by balloon measurement and through comparison with the GPM (Global Precipitation Measurement) DPR (Dual-frequency Precipitation Radar) to within 1 dB. The attenuated radar reflectivities obtained by the Phased Array Radar Network at X band are corrected based on the Bayesian scheme proposed in Shimamura et al. [2016]. The obtained high-resolution (every 30 seconds / 100 elevation angles) 3D reflectivity and rain-rate fields are used to nowcast the surface rain rate up to 30 minutes ahead. These new products are transferred to the Osaka Local Government in operational mode and evaluated by several sections in Osaka Prefecture. Furthermore, a new Phased Array Radar with polarimetric function was developed in 2017 and will be operated in the 2017 fiscal year. In this presentation, the Phased Array Radar, network architecture, processing algorithms, evaluation of the social experiment and the first Multi-Parameter Phased Array Radar experiment are presented.

  12. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.
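
The stream tube concept above, equal-discharge tubes whose walls water and sediment do not cross, can be made concrete by locating tube boundaries on the cumulative lateral discharge curve of a cross-section. A small sketch of that idea (a hypothetical helper, not GSTARS source code):

```python
def stream_tube_boundaries(y, q_lat, n_tubes):
    """Lateral positions splitting a cross-section into stream tubes of
    equal discharge. y: lateral coordinates; q_lat: unit discharge at y."""
    # Cumulative discharge across the section (trapezoidal integration).
    cum = [0.0]
    for i in range(1, len(y)):
        cum.append(cum[-1] + 0.5 * (q_lat[i] + q_lat[i - 1]) * (y[i] - y[i - 1]))
    total = cum[-1]
    targets = [total * k / n_tubes for k in range(1, n_tubes)]
    bounds, j = [], 1
    for t in targets:
        while cum[j] < t:
            j += 1
        # Linear interpolation between y[j-1] and y[j].
        frac = (t - cum[j - 1]) / (cum[j] - cum[j - 1])
        bounds.append(y[j - 1] + frac * (y[j] - y[j - 1]))
    return bounds
```

For a uniform unit-discharge profile the boundaries fall at equal widths; for a real profile they crowd toward the thalweg, where most of the discharge is carried.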

  13. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
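
The trial-by-trial classification described, contrasting ERP responses to stimuli in the attended versus unattended stream, can be illustrated with a toy pipeline: window-averaged amplitudes in N1- and P3-range latencies fed to a least-squares linear discriminant. This is a hedged sketch with made-up window bounds and a generic classifier, not the authors' actual analysis:

```python
import numpy as np

def erp_features(epochs, fs, windows=((0.08, 0.15), (0.25, 0.45))):
    """Mean amplitude in N1- and P3-like latency windows per epoch.
    epochs: (n_epochs, n_samples) stimulus-locked single-channel EEG."""
    feats = []
    for lo, hi in windows:
        a, b = int(lo * fs), int(hi * fs)
        feats.append(epochs[:, a:b].mean(axis=1))
    return np.column_stack(feats)

def train_lda(x, y):
    """Two-class least-squares discriminant: weights for sign(x1 @ w)."""
    x1 = np.column_stack([x, np.ones(len(x))])   # append bias column
    return np.linalg.lstsq(x1, np.where(y > 0, 1.0, -1.0), rcond=None)[0]

def classify(x, w):
    x1 = np.column_stack([x, np.ones(len(x))])
    return np.sign(x1 @ w)
```

In the actual BCI the contrast uses every stimulus in both streams rather than a single channel, but the attended-vs-unattended decision reduces to a binary discriminant of this general shape.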

  14. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  15. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Science.gov (United States)

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
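
The PNA simulation idea, a nutrient pulse advected downstream while Monod-kinetics uptake removes nutrient in each reach, can be sketched minimally with explicit upwind advection. All parameter names and values here are illustrative, not the authors' model:

```python
def monod_uptake(conc, u_max, k_s):
    """Uptake rate as a saturating (Monod) function of concentration."""
    return u_max * conc / (k_s + conc)

def simulate_pulse(c0, background, u_max, k_s, velocity, depth,
                   dx, n_cells, dt, n_steps):
    """Advect a nutrient pulse down a 1-D stream with Monod uptake.
    Stable only for velocity * dt / dx <= 1 (CFL condition)."""
    c = [background] * n_cells
    c[0] = c0                      # pulse injected at the upstream cell
    for _ in range(n_steps):
        new = c[:]
        for i in range(1, n_cells):  # first-order upwind advection
            new[i] = c[i] - velocity * dt / dx * (c[i] - c[i - 1])
        # benthic uptake per unit depth, clipped at zero
        c = [max(0.0, ci - monod_uptake(ci, u_max, k_s) / depth * dt)
             for ci in new]
    return c
```

Fitting observed breakthrough curves with such a model is what lets the PNA technique recover uptake parameters in rivers too large for constant-rate additions.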

  16. Low-flow characteristics of Virginia streams

    Science.gov (United States)

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. Monthly
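
The MOVE.1 line of organic correlation mentioned above has a simple closed form: the slope is the ratio of the standard deviations of the two sites' records, signed by their correlation (Hirsch, 1982). A small sketch for transferring statistics from an index station (an illustrative helper, not USGS code):

```python
import math

def move1(x, y):
    """Line of organic correlation (MOVE.1): slope and intercept for
    relating a short-record site y to an index site x while preserving
    the variance of y, unlike ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)
    slope = math.copysign(sy / sx, r)   # |slope| = sy/sx, sign from r
    return slope, my - slope * mx
```

In practice the regression is usually fit on log-transformed flows; the variance-preserving property is what makes the transferred low-flow quantiles unbiased in spread.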

  17. Performance study of monochromatic synchrotron X-ray computed tomography using a linear array detector

    Energy Technology Data Exchange (ETDEWEB)

    Kazama, Masahiro; Takeda, Tohoru; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akiba, Masahiro; Yuasa, Tetsuya; Hyodo, Kazuyuki; Ando, Masami; Akatsuka, Takao

    1997-09-01

Monochromatic x-ray computed tomography (CT) using synchrotron radiation (SR) is being developed for detection of non-radioactive contrast materials at low concentrations for application in clinical diagnosis. A new SR-CT system with improved contrast resolution was constructed using a linear array detector, which provides wide dynamic range, and a double monochromator. The performance of this system was evaluated in a phantom and a rat model of brain ischemia. The system consists of a silicon (111) double-crystal monochromator, an x-ray shutter, an ionization chamber, x-ray slits, a scanning table for the target organ, and an x-ray linear array detector. The research was carried out at the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring at KEK, Japan. In this experiment, the reconstructed image of the spatial-resolution phantom clearly showed the 1 mm holes. At 1 mm slice thickness, the above-K-edge image of the phantom resolved iodine-based contrast material at a concentration of 200 μg/ml, whereas the K-edge energy-subtraction image resolved contrast material at a concentration of 500 μg/ml. The cerebral arteries filled with iodine microspheres were clearly revealed, and the ischemic regions in the right temporal lobe and frontal lobe were depicted as non-vascular regions. The measured minimal detectable concentration of iodine on the above-K-edge image is about 6 times higher than the expected value of 35.3 μg/ml because of the high dark current of this detector. Thus, a detector upgrade using a liquid-nitrogen-cooled CCD, which improves the dynamic range, is under construction. (author)

  18. Discussion paper for a highly parallel array processor-based machine

    International Nuclear Information System (INIS)

    Hagstrom, R.; Bolotin, G.; Dawson, J.

    1984-01-01

The architectural plan for a quickly realizable implementation of a highly parallel special-purpose computer system with peak performance in the range of 6 billion floating point operations per second is discussed. The architecture is suited to lattice gauge theoretical computations of fundamental physics interest and may be applicable to a range of other numerically intensive computational problems. The plan is quickly realizable because it employs a maximum of commercially available hardware subsystems and because the architecture is software-transparent to the individual processors, allowing straightforward re-use of whatever commercially available operating systems and support software are suitable to run on the commercially produced processors. A tiny prototype instrument, designed along this architecture, has already operated. A few elementary examples of programs which can run efficiently are presented. The large machine which the authors propose to build would be based upon a highly competent array processor, the ST-100 Array Processor, and specific design possibilities are discussed. The first step toward realizing this plan is to install a single ST-100 to allow algorithm development to proceed while a demonstration unit is built using two of the ST-100 Array Processors.

  19. Seismic Background Noise Analysis of BRTR (PS-43) Array

    Science.gov (United States)

    Ezgi Bakir, Mahmure; Meral Ozel, Nurcan; Umut Semin, Korhan

    2015-04-01

The seismic background noise variation of the BRTR array, composed of two sub-arrays located in Ankara and in Ankara-Keskin, has been investigated by calculating Power Spectral Densities and Probability Density Functions for seasonal and diurnal noise variations between 2005 and 2011. PSDs were computed within the frequency range of 100 s - 10 Hz. The results show little change in noise conditions with time and location. In particular, noise-level changes were observed at 3-5 Hz in the diurnal variations at the Keskin array, and there is a 5-7 dB difference between day and night in the cultural noise band (1-10 Hz). On the other hand, noise levels of the medium-period array are higher at 1-2 Hz than those of the short-period array. High noise levels were observed during daily working hours compared to night-time in the cultural noise band. The seasonal background noise variations at the two sites also show very similar properties. Since these stations are borehole instruments located away from the coasts, only a small change in noise levels caused by microseisms was observed. Comparison between the Keskin short-period array and the Ankara medium-period array shows that the Keskin array is quieter than the Ankara array.
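
The station-noise PSDs behind such an analysis are conventionally estimated by averaging windowed periodograms over overlapping segments and expressing power in dB. A minimal Welch-style estimate using only NumPy (segment length and overlap here are illustrative defaults, not the study's processing parameters):

```python
import numpy as np

def welch_psd(x, fs, nseg=256):
    """One-sided PSD by averaging Hann-windowed, 50%-overlap periodograms."""
    win = np.hanning(nseg)
    norm = fs * (win ** 2).sum()          # PSD normalization
    step = nseg // 2
    segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, step)]
    psd = np.zeros(nseg // 2 + 1)
    for s in segs:
        spec = np.fft.rfft(win * (s - s.mean()))   # detrend then window
        psd += (np.abs(spec) ** 2) / norm
    psd /= len(segs)
    psd[1:-1] *= 2                         # fold negative frequencies
    freqs = np.fft.rfftfreq(nseg, 1 / fs)
    return freqs, psd
```

Converting with `10 * np.log10(psd)` gives power in dB, the units in which the day/night and short-period/medium-period comparisons above are quoted; stacking many such PSDs per station yields the probability density functions.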

  20. 32 x 16 CMOS smart pixel array for optical interconnects

    Science.gov (United States)

    Kim, Jongwoo; Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Choquette, Kent D.; Kiamilev, Fouad E.

    2000-05-01

Free-space optical interconnects can increase throughput capacities and eliminate much of the energy consumption required for "all electronic" systems. High-speed optical interconnects can be achieved by integrating optoelectronic devices with conventional electronics. Smart pixel arrays have been developed which use optical interconnects. An individual smart pixel cell is composed of a vertical cavity surface emitting laser (VCSEL), a photodetector, an optical receiver, a laser driver, and digital logic circuitry. Oxide-confined VCSELs are being developed to operate at 850 nm with a threshold current of approximately 1 mA. Multiple-quantum-well photodetectors are being fabricated from AlGaAs for use with the 850 nm VCSELs. The VCSELs and photodetectors are being integrated with complementary metal oxide semiconductor (CMOS) circuitry using flip-chip bonding. The CMOS circuitry is integrated into a 32 × 16 smart pixel array. The 512 smart pixels are serially linked; thus, an entire data stream may be clocked through the chip and output electrically by the last pixel. Electrical testing is being performed on the CMOS smart pixel array. Using an on-chip pseudo-random number generator, a digital data sequence was cycled through the chip, verifying operation of the digital circuitry. Although the prototype chip was fabricated in 1.2 μm technology, simulations have demonstrated that the array can operate at 1 Gb/s per pixel using 0.5 μm technology.
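
On-chip pseudo-random generators for exercising a serial pixel chain like this are typically linear feedback shift registers. A minimal software model of a Fibonacci LFSR, purely illustrative since the chip's actual generator is not described in the abstract:

```python
def lfsr_stream(taps, state, n):
    """Fibonacci LFSR: emit n pseudo-random bits from a nonzero seed.
    taps: 1-indexed register positions XORed to form the feedback bit."""
    width = max(taps)
    bits = []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # feedback = XOR of tapped bits
        bits.append(state & 1)             # output the low bit
        state = (state >> 1) | (fb << (width - 1))
    return bits
```

A maximal-length LFSR of width w cycles through all 2^w - 1 nonzero states, so the test sequence it clocks through the 512-pixel chain is long, repeatable, and cheap in silicon.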

  1. A Fast Tool for Assessing the Power Performance of Large WEC arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé

In the present work, a tool for computing wave energy converter array hydrodynamic forces and power performance is developed. The tool leads to a significant reduction in computation time compared with standard boundary element method based codes while keeping similar levels of accuracy. This makes it suitable for array layout optimization, where large numbers of simulations are required. Furthermore, the tool is developed within an open-source environment, Python 2.7, so that it is fully accessible to anyone willing to make use of it.

  2. Characterizing Milky Way Tidal Streams and Dark Matter with MilkyWay@home

    Science.gov (United States)

    Newberg, Heidi Jo; Shelton, Siddhartha; Weiss, Jake

    2018-01-01

    MilkyWay@home is a 0.5 PetaFLOPS volunteer computing platform that is mapping out the density substructure of the Sagittarius Dwarf Tidal Stream, the so-called bifurcated portion of the Sagittarius Stream, and the Virgo Overdensity, using turnoff stars from the Sloan Digital Sky Survey. It is also using the density of stars along tidal streams such as the Orphan Stream to constrain properties of the dwarf galaxy progenitor of this stream, including the dark matter portion. Both of these programs are enabled by a specially-built optimization package that uses differential evolution or particle swarm methods to find the optimal model parameters to fit a set of data. To fit the density of tidal streams, 20 parameters are simultaneously fit to each 2.5-degree-wide stripe of SDSS data. Five parameters describing the stellar and dark matter profile of the Orphan Stream progenitor and the time that the dwarf galaxy has been evolved through the Galactic potential are used in an n-body simulation that is then fit to observations of the Orphan Stream. New results from MilkyWay@home will be presented. This project was supported by NSF grant AST 16-15688, the NASA/NY Space Grant fellowship, and contributions made by The Marvin Clan, Babette Josephs, Manit Limlamai, and the 2015 Crowd Funding Campaign to Support Milky Way Research.
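
The optimization package described fits model parameters with differential evolution or particle swarm methods. A minimal DE/rand/1/bin loop conveys the former; this is a generic sketch, not MilkyWay@home's actual optimizer, and the control parameters are conventional defaults:

```python
import random

def differential_evolution(loss, bounds, pop_size=20, f=0.8, cr=0.9,
                           iters=200, seed=1):
    """Minimal DE/rand/1/bin: evolve parameter vectors toward the
    minimum of `loss` inside the box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [loss(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            jrand = rng.randrange(dim)     # force at least one mutated gene
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial[j] = min(max(v, lo), hi)
            tc = loss(trial)
            if tc <= costs[i]:             # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

In the stream-fitting application `loss` would compare a 20-parameter density model (or an n-body run) against the SDSS stripe data; the evolutionary loop itself is the same.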

  3. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

Small-aperture microphone arrays provide many advantages for portable devices and hearing-aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small-aperture arrays. The effects of array aperture on localization are analyzed using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of the signal power are simulated to demonstrate how to optimize localization performance, which is examined by introducing frequency error into the proposed method. The proposed method is validated for a 5 mm array aperture by simulations and by experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with a precision as high as 6 degrees, even in the presence of wind noise and other noises. Furthermore, the proposed method reduces computational complexity compared with other methods.
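
A representative subspace method of the kind the abstract describes is MUSIC: steering vectors from the array manifold are projected onto the noise subspace of the snapshot covariance, and source directions appear as peaks of the pseudo-spectrum. A minimal narrowband sketch for a uniform linear array (the paper's exact algorithm and geometry may differ):

```python
import numpy as np

def steering(angles_rad, n_mics, spacing, wavelength):
    """Array manifold of a uniform linear array (far-field, narrowband)."""
    k = 2 * np.pi / wavelength
    m = np.arange(n_mics)[:, None]
    return np.exp(-1j * k * spacing * m * np.sin(angles_rad)[None, :])

def music_spectrum(snapshots, n_sources, spacing, wavelength, grid_deg):
    """MUSIC pseudo-spectrum over a grid of candidate arrival angles."""
    n_mics = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
    _, vecs = np.linalg.eigh(r)                  # eigenvalues ascending
    noise = vecs[:, : n_mics - n_sources]        # noise subspace
    a = steering(np.deg2rad(np.asarray(grid_deg)), n_mics, spacing, wavelength)
    proj = noise.conj().T @ a                    # projection per grid angle
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)
```

With a 5 mm aperture the phase differences across the array are tiny, which is exactly why the abstract's analysis of manifold sensitivity to frequency error matters.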

  4. Enhancing Network Data Obliviousness in Trusted Execution Environment-based Stream Processing Systems

    KAUST Repository

    Alsibyani, Hassan M.

    2018-05-15

    Cloud computing usage is increasing and a common concern is the privacy and security of the data and computation. Third party cloud environments are not considered fit for processing private information because the data will be revealed to the cloud provider. However, Trusted Execution Environments (TEEs), such as Intel SGX, provide a way for applications to run privately and securely on untrusted platforms. Nonetheless, using a TEE by itself for stream processing systems is not sufficient since network communication patterns may leak properties of the data under processing. This work addresses leaky topology structures and suggests mitigation techniques for each of these. We create specific metrics to evaluate leaks occurring from the network patterns; the metrics measure information leaked when the stream processing system is running. We consider routing techniques for inter-stage communication in a streaming application to mitigate this data leakage. We consider a dynamic policy to change the mitigation technique depending on how much information is currently leaking. Additionally, we consider techniques to hide irregularities resulting from a filtering stage in a topology. We also consider leakages resulting from applications containing cycles. For each of the techniques, we explore their effectiveness in terms of the advantage they provide in overcoming the network leakage. The techniques are tested partly using simulations and some were implemented in a prototype SGX-based stream processing system.

  5. Dynamic array of dark optical traps

    DEFF Research Database (Denmark)

    Daria, V.R.; Rodrigo, P.J.; Glückstad, J.

    2004-01-01

A dynamic array of dark optical traps is generated for simultaneous trapping and arbitrary manipulation of multiple low-index microstructures. The dynamic intensity patterns forming the dark optical trap arrays are generated using a nearly loss-less phase-to-intensity conversion of a phase-encoded coherent light source. Two-dimensional input phase distributions corresponding to the trapping patterns are encoded using a computer-programmable spatial light modulator, enabling each trap to be shaped and moved arbitrarily within the plane of observation. We demonstrate the generation of multiple dark optical traps for simultaneous manipulation of hollow "air-filled" glass microspheres suspended in an aqueous medium. © 2004 American Institute of Physics.

  6. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, skilled sonographers are scarce in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 s, which is acceptable for accurate real-time diagnosis.

  7. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    Directory of Open Access Journals (Sweden)

    Liu Jiping

    2017-12-01

Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory and the very slow disk transfer rate. In this paper, we propose a stream tilling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping between the input and the computing process is broken. We then realize a streaming framework for scheduling the I/O processes and computing units. Each computing unit encapsulates the same copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, experiments demonstrate that our stream tilling estimation efficiently alleviates the heavy pressure of I/O-bound work, and the measured speedups after optimization greatly outperform directly parallelized versions in shared memory systems with multi-core processors.
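
The scheduling idea, decoupling tile I/O from the computing units through a stream so that asynchronous workers overlap with reading, can be sketched with a bounded queue. This is a generic producer/consumer sketch in shared memory, not the authors' implementation:

```python
import queue
import threading

def stream_tiles(tiles, estimate, n_workers=4):
    """Stream tiles through a bounded queue so the I/O producer and the
    surface-area estimators (consumers) overlap instead of serializing."""
    q = queue.Queue(maxsize=2 * n_workers)   # bounds memory pressure
    results, lock = [], threading.Lock()

    def worker():
        while True:
            tile = q.get()
            if tile is None:                 # poison pill: shut down
                break
            area = estimate(tile)
            with lock:
                results.append(area)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for tile in tiles:                       # producer: e.g. read from disk
        q.put(tile)                          # blocks when the queue is full
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()
    return sum(results)                      # total estimated surface area
```

Because the queue is bounded, only a few tiles are resident at once, which is the point of tiling: memory use stays flat no matter how large the input data set is.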

  8. Parallel-Bit Stream for Securing Iris Recognition

    OpenAIRE

    Elsayed Mostafa; Maher Mansour; Heba Saad

    2012-01-01

Biometrics-based authentication schemes have usability advantages over traditional password-based authentication schemes. However, biometrics raises several privacy concerns and has disadvantages compared to traditional passwords: a biometric template is neither secret nor revocable. In this paper, we propose a fast method for securing a revocable iris template using parallel-bit-stream watermarking to overcome these problems. Experimental results prove that the proposed method has low computation time ...

  9. Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation

    Science.gov (United States)

    Kempe, René; Huang, Yu; Parra, Lucas C.

    2014-04-01

    Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
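
The key trick above is linearity: FEM solutions for unit current at each HD electrode are computed once, and the field of any pad montage is then just a weighted sum of those solutions. A minimal sketch of the superposition step (array shapes and the helper name are assumptions, not the paper's code):

```python
import numpy as np

def pad_field(lead_fields, currents):
    """Electric field of an arbitrary montage by linear superposition.

    lead_fields : (n_electrodes, n_nodes, 3) array of precomputed FEM
                  solutions, one per HD electrode at unit current
                  (against a common reference).
    currents    : (n_electrodes,) injected currents; electrodes sampling
                  the same pad share its total current, and the currents
                  must sum to zero (charge conservation).
    """
    w = np.asarray(currents, dtype=float)
    assert abs(w.sum()) < 1e-9, "injected currents must balance"
    # Contract the electrode axis: field at every node for this montage.
    return np.tensordot(w, lead_fields, axes=1)   # (n_nodes, 3)
```

Evaluating a new pad montage is then a matrix-vector product rather than a fresh FEM solve, which is what makes interactive exploration of configurations feasible in a clinic.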

  10. On the wake structure in streaming complex plasmas

    International Nuclear Information System (INIS)

    Ludwig, Patrick; Kählert, Hanno; Bonitz, Michael; Miloch, Wojciech J

    2012-01-01

    The theoretical description of complex (dusty) plasmas requires multiscale concepts that adequately incorporate the correlated interplay of streaming electrons and ions, neutrals and dust grains. Knowing the effective dust-dust interaction, the multiscale problem can be effectively reduced to a one-component plasma model of the dust subsystem. The goal of this paper is a systematic evaluation of the electrostatic potential distribution around a dust grain in the presence of a streaming plasma environment by means of two complementary approaches: (i) a high-precision computation of the dynamically screened Coulomb potential from the dynamic dielectric function and (ii) full 3D particle-in-cell simulations, which self-consistently include dynamical grain charging and nonlinear effects. The range of applicability of these two approaches is addressed. (paper)

  11. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are ''number crunchers'' that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.

  12. Numerical Simulations and Experimental Measurements of Scale-Model Horizontal Axis Hydrokinetic Turbines (HAHT) Arrays

    Science.gov (United States)

    Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto

    2014-11-01

    The performance, turbulent wake evolution and interaction of multiple Horizontal Axis Hydrokinetic Turbines (HAHT) are analyzed in a 45:1 scale model setup. We combine experimental measurements with different RANS-based computational simulations that model the turbines with sliding-mesh, rotating-reference-frame and blade element theory strategies. The influence of array spacing and Tip Speed Ratio on performance and wake velocity structure is investigated in three different array configurations: two coaxial turbines at different downstream spacing (5d to 14d), three coaxial turbines with 5d and 7d downstream spacing, and three turbines with lateral offset (0.5d) and downstream spacing (5d & 7d). Comparison with experimental measurements provides insights into the dynamics of HAHT arrays, and by extension into closely packed HAWT arrays. The experimental validation process also highlights the influence of the closure model used (k-ω SST and k-ε) and the flow Reynolds number (Re = 40,000 to 100,000) on the computational predictions of device performance and of the flow field inside the above-mentioned arrays, establishing the strengths and limitations of existing numerical models for use in industrially relevant settings (computational cost and time). Supported by DOE through the National Northwest Marine Renewable Energy Center (NNMREC).

  13. A fast density-based clustering algorithm for real-time Internet of Things stream.

    Science.gov (United States)

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of algorithms for clustering data streams: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Density-based clustering is therefore a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.
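
As a reminder of how the density-based family works, here is a minimal DBSCAN-style clusterer (the generic textbook algorithm, not the paper's stream method; the `eps` and `min_pts` values are illustrative). It exhibits the properties the abstract highlights: arbitrary-shape clusters, outlier handling, and no preset cluster count.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=3):
    """Label points with cluster ids; -1 marks outliers (noise)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1)
    # Pairwise distances and epsilon-neighborhoods (self included).
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                      # already clustered, or not a core point
        labels[i] = cluster               # start a new cluster at core point i
        frontier = list(neighbors[i])
        while frontier:                   # density-reachability expansion
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels

pts = [[0, 0], [0, 0.1], [0.1, 0], [5, 5], [5, 5.1], [5.1, 5], [10, 10]]
print(dbscan(pts))   # two dense groups plus one outlier labelled -1
```

The O(n²) distance matrix is what stream variants avoid by maintaining summarized micro-clusters instead.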

  14. Radiation shielding techniques and applications. 3. Analysis of Photon Streaming Through and Around Shield Doors

    International Nuclear Information System (INIS)

    Barnett, Marvin; Hack, Joe; Nathan, Steve; White, Travis

    2001-01-01

    Westinghouse Safety Management Solutions (Westinghouse SMS) has been tasked with providing radiological engineering design support for the new Commercial Light Water Reactor Tritium Extraction Facility (CLWR-TEF) being constructed at the Savannah River Site (SRS). The Remote Handling Building (RHB) of the CLWR-TEF will act as the receiving facility for irradiated targets used in the production of tritium for the U.S. Department of Energy (DOE). Because of the high dose rates, approaching 50 000 rads/h (500 Gy/h) from the irradiated target bundles, significant attention has been made to shielding structures within the facility. One aspect of the design that has undergone intense review is the shield doors. The RHB has six shield doors that needed to be studied with respect to photon streaming. Several aspects had to be examined to ensure that the design meets the radiation dose levels. Both the thickness and streaming issues around the door edges were designed and examined. Photon streaming through and around a shield door is a complicated problem, creating a reliance on computer modeling to perform the analyses. The computer code typically used by the Westinghouse SMS in the evaluation of photon transport through complex geometries is the MCNP Monte Carlo computer code. The complexity of the geometry within the problem can cause problems even with the Monte Carlo codes. Striking a balance between how the code handles transport through the shield door with transport through the streaming paths, particularly with the use of typical variance reduction methods, is difficult when trying to ensure that all important regions of the model are sampled appropriately. The thickness determination used a simple variance reduction technique. In construction, the shield door will not be flush against the wall, so a solid rectangular slab leaves streaming paths around the edges. Administrative controls could be used to control dose to workers; however, 10 CFR 835.1001 states

  15. StreamCat

    Data.gov (United States)

    U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...

  16. Stream Crossings

    Data.gov (United States)

    Vermont Center for Geographic Information — Physical measurements and attributes of stream crossing structures and adjacent stream reaches which are used to provide a relative rating of aquatic organism...

  17. Optical Interconnection Via Computer-Generated Holograms

    Science.gov (United States)

    Liu, Hua-Kuang; Zhou, Shaomin

    1995-01-01

    Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.

  18. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    Science.gov (United States)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the domain of mobile App recommendation, an item-based collaborative filtering algorithm combined with a weighted Slope One algorithm is used to generate App recommendations, further improving on traditional collaborative filtering with respect to the cold-start and data-sparsity problems. The recommendation algorithm is parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the timeliness of the App recommendations.
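
The weighted Slope One scheme named in the abstract is a simple, well-known predictor; a compact serial sketch follows (the toy rating data and function name are illustrative, and the Spark parallelization is not shown). Each item-pair deviation is weighted by the number of users who co-rated the pair.

```python
from collections import defaultdict

def slope_one_predict(ratings, user, target):
    """ratings: {user: {item: rating}}; predict `user`'s rating of `target`."""
    diffs, counts = defaultdict(float), defaultdict(int)
    for r in ratings.values():
        if target in r:
            for item, val in r.items():
                if item != target:
                    diffs[item] += r[target] - val   # accumulate pairwise deviation
                    counts[item] += 1
    num = den = 0.0
    for item, val in ratings[user].items():
        if counts[item]:                             # weight by co-rating count
            num += (diffs[item] / counts[item] + val) * counts[item]
            den += counts[item]
    return num / den if den else None

ratings = {
    "a": {"app1": 5, "app2": 3, "app3": 2},
    "b": {"app1": 3, "app2": 4},
    "c": {"app2": 2, "app3": 5},
}
print(slope_one_predict(ratings, "b", "app3"))   # ≈ 3.33
```

The per-pair deviation sums are embarrassingly parallel, which is what makes the algorithm a natural fit for a Spark map-reduce implementation.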

  19. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A kind of point-array structure light is projected sequentially onto the reference plane and onto the object surface to be tested and thus forms a pair of point-array images. A mathematical model is established to formulize the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point on the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable for implementing 3D imaging of object surfaces with complex topology or large height discontinuities.

  20. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
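
One classic source-code technique compared in such studies is processing a 2D array through a single flat buffer with computed row-major offsets, rather than chasing per-row pointers. A sketch of the idea (in Python for brevity; the study itself concerns C and C++, and the sizes below are illustrative):

```python
nrows, ncols = 3, 4
flat = [0] * (nrows * ncols)      # row-major storage in one contiguous buffer

def set_at(i, j, v):
    flat[i * ncols + j] = v       # offset = row * width + column

def get_at(i, j):
    return flat[i * ncols + j]

for i in range(nrows):
    for j in range(ncols):
        set_at(i, j, i * 10 + j)

print(get_at(2, 3))   # → 23
```

In C the same layout keeps the inner loop walking consecutive memory addresses, which is cache-friendly; iterating column-first over the same buffer strides by `ncols` elements and is typically much slower.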

  1. Prioritized Contact Transport Stream

    Science.gov (United States)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.

  2. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats...... and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled...... along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored...

  3. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    International Nuclear Information System (INIS)

    Palazzo, S.; Vagliasindi, G.; Arena, P.; Murari, A.; Mazon, D.; De Maack, A.

    2010-01-01

    In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8 bit 496x560-sized images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.

  4. StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.

    Science.gov (United States)

    Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei

    2017-10-18

    Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.

  5. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  6. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

    Full Text Available In this paper we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases but reach a limit. Beyond a certain point, the division of jobs into additional sublots does not improve the solution.
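
A toy example shows why lot streaming reduces makespan: splitting a job into sublots lets downstream machines start work before the whole job finishes upstream. The instance below (one job, two identical machines, unit processing times) is illustrative and not one of the paper's instances.

```python
def flowshop_makespan(sublots, machines):
    """Makespan of sublots processed in order through a flowshop.

    `sublots` are sublot sizes; `machines` gives per-unit processing time
    on each machine. Each sublot moves to the next machine as soon as both
    the sublot and the machine are ready.
    """
    finish = [0.0] * len(machines)
    for lot in sublots:
        for m, rate in enumerate(machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + lot * rate
    return finish[-1]

print(flowshop_makespan([6], [1, 1]))        # single lot of 6 units: 12.0
print(flowshop_makespan([2, 2, 2], [1, 1]))  # three sublots of 2: 8.0
```

Splitting the job lets machine 2 start after only 2 time units instead of 6, overlapping the two stages; with more stages the overlap compounds, which matches the abstract's observation that the benefit grows with the number of stages.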

  7. A Macintosh based data system for array spectrometers (Poster)

    Science.gov (United States)

    Bregman, J.; Moss, N.

    An interactive data acquisition and reduction system has been assembled by combining a Macintosh computer with an instrument controller (an Apple II computer) via an RS-232 interface. The data system provides flexibility for operating different linear array spectrometers. The standard Macintosh interface is used to provide ease of operation and to allow transferring the reduced data to commercial graphics software.

  8. Radioactive contamination of fishes in lake and streams impacted by the Fukushima nuclear power plant accident

    International Nuclear Information System (INIS)

    Yoshimura, Mayumi; Yokoduka, Tetsuya

    2014-01-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident in March 2011 emitted radioactive substances into the environment, contaminating a wide array of organisms including fishes. We found higher concentrations of radioactive cesium ( 137 Cs) in brown trout (Salmo trutta) than in rainbow trout (Oncorhynchus nerka), and 137 Cs concentrations in brown trout were higher in a lake than in a stream. Our analyses indicated that these differences were primarily due to differences in diet, but that habitat also had an effect. Radiocesium concentrations ( 137 Cs) in stream charr (Salvelinus leucomaenis) were higher in regions with more concentrated aerial activity and in older fish. These results were also attributed to dietary and habitat differences. Preserving uncontaminated areas by remediating soils and releasing uncontaminated fish would help restore this popular fishing area but would require a significant effort, followed by a waiting period to allow activity concentrations to fall below the threshold limits for consumption. - Highlight: • Concentration of 137 Cs in brown trout was higher than in rainbow trout. • 137 Cs concentration of brown trout in a lake was higher than in a stream. • 137 Cs concentration of stream charr was higher in region with higher aerial activity. • Concentration of 137 Cs in stream charr was higher in older fish. • Difference of contamination among fishes was due to difference in diet and habitat

  9. Beamforming with a circular array of microphones mounted on a rigid sphere (L)

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn; Fernandez Grande, Efren

    2011-01-01

    Beamforming with uniform circular microphone arrays can be used for localizing sound sources over 360°. Typically, the array microphones are suspended in free space or they are mounted on a solid cylinder. However, the cylinder is often considered to be infinitely long because the scattering problem...... has no exact solution for a finite cylinder. Alternatively one can use a solid sphere. This investigation compares the performance of a circular array mounted on a rigid sphere with that of such an array in free space and mounted on an infinite cylinder, using computer simulations. The examined...

  10. Ring-array processor distribution topology for optical interconnects

    Science.gov (United States)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  11. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting changes in massive streaming data with a non-stationary distribution helps to flag anomalies, to clean noise, and to report new patterns. In this paper, we employ a novel approach for detecting changes in streaming data with the purpose of improving the quality of modeling the data streams. Through observing the outliers, this approach to change detection uses a weighted standard deviation to monitor the evolution of the distribution of data streams. A cumulative statistical test, Page-Hinkley, is employed to collect the evidence of changes in distribution. The parameter used for reporting the changes is self-adaptively adjusted according to the distribution of the data streams, rather than set to a fixed empirical value. The self-adaptability of the novel approach enhances the effectiveness of modeling data streams by catching distribution changes in a timely manner. We validated the approach on an online clustering framework with the benchmark KDD Cup 1999 intrusion detection data set as well as with a real-world grid data set. The validation results demonstrate its better performance in achieving higher accuracy and a lower percentage of outliers compared to other change detection approaches. © 2010 Springer-Verlag.
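
For reference, a minimal Page-Hinkley detector in its generic textbook form is sketched below; the paper's weighted standard deviation and self-adaptive threshold are not reproduced here, and `delta`, `lambda_`, and the synthetic stream are illustrative.

```python
class PageHinkley:
    """Generic Page-Hinkley test for detecting an upward shift in the mean."""

    def __init__(self, delta=0.05, lambda_=5.0):
        self.delta, self.lambda_ = delta, lambda_   # tolerance, alarm threshold
        self.mean, self.n = 0.0, 0
        self.cum, self.min_cum = 0.0, 0.0

    def update(self, x):
        """Feed one observation; return True if a change is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n       # incremental running mean
        self.cum += x - self.mean - self.delta      # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lambda_   # PH statistic vs threshold

ph = PageHinkley()
stream = [0.0] * 50 + [3.0] * 20                    # mean shift at t = 50
alarms = [t for t, x in enumerate(stream) if ph.update(x)]
print(alarms[0] if alarms else None)
```

The paper's contribution replaces the fixed `lambda_` with a value adapted online to the observed distribution, trading a tunable constant for self-adaptability.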

  12. Hybrid Information Flow Analysis for Programs with Arrays

    Directory of Open Access Journals (Sweden)

    Gergö Barany

    2016-07-01

    Full Text Available Information flow analysis checks whether certain pieces of (confidential) data may affect the results of computations in unwanted ways and thus leak information. Dynamic information flow analysis adds instrumentation code to the target software to track flows at run time and raise alarms if a flow policy is violated; hybrid analyses combine this with preliminary static analysis. Using a subset of C as the target language, we extend previous work on hybrid information flow analysis that handled pointers to scalars. Our extended formulation handles arrays, pointers to array elements, and pointer arithmetic. Information flow through arrays of pointers is tracked precisely while arrays of non-pointer types are summarized efficiently. A prototype of our approach is implemented using the Frama-C program analysis and transformation framework. Work on a full machine-checked proof of the correctness of our approach using Isabelle/HOL is well underway; we present the existing parts and sketch the rest of the correctness argument.

  13. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    Science.gov (United States)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    In many real-time fields a sustained high-speed data recording system is required. This paper proposes a high-speed, sustained data recording system based on the complex RAID 3+0. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs) and a Main Controller Module (MCM). The ACM, implemented by an FPGA chip, is used to split the high-speed incoming data stream into several lower-speed streams and to generate one parity-code stream synchronously. It can also inversely recover the original data stream while reading. The SCMs record the lower-speed streams from the ACM onto the SCSI disk drives. In the SCM, dual-page buffering is adopted to implement the speed-matching function and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve system reliability and are compared with the Traditional Parity Group (TPG). This recording system can be used conveniently in many areas of data recording, storage, playback and remote backup, thanks to its high reliability.
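
The parity-stream idea behind the RAID-3 split can be sketched at byte level: stripe the incoming stream across N sub-streams, XOR them into one parity stream, and rebuild any single lost sub-stream from the survivors. This toy code is illustrative only, not the paper's FPGA design.

```python
import functools
import operator

def split_with_parity(data: bytes, n: int):
    """Stripe `data` across n sub-streams and compute an XOR parity stream."""
    data += b"\x00" * ((-len(data)) % n)          # pad to a multiple of n
    streams = [data[i::n] for i in range(n)]      # byte-striped sub-streams
    parity = bytes(functools.reduce(operator.xor, bs) for bs in zip(*streams))
    return streams, parity

def recover(streams, parity, lost):
    """Rebuild sub-stream `lost` by XOR-ing parity with the surviving streams."""
    survivors = [s for i, s in enumerate(streams) if i != lost]
    return bytes(functools.reduce(operator.xor, bs) for bs in zip(parity, *survivors))

streams, parity = split_with_parity(b"high-speed data!", 4)
print(recover(streams, parity, 2) == streams[2])   # parity restores the lost stream
```

Because XOR is associative and self-inverse, the parity of the survivors plus the parity stream yields the missing stream exactly; this is why one failed disk (or SCM) in a parity group is recoverable.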

  14. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves the two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F-1*(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
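
The conventional row-major mapping and its inverse, which the paper's F*() and F-1*() generalize via axial vectors so the array can grow along any dimension, can be sketched as follows (plain fixed-shape version only; the extendible bookkeeping is not shown).

```python
def linear_address(index, shape):
    """Row-major linear offset of a k-dimensional index (Horner's scheme)."""
    addr = 0
    for i, n in zip(index, shape):
        addr = addr * n + i
    return addr

def k_index(addr, shape):
    """Inverse mapping: recover the k-dimensional index from a linear offset."""
    index = []
    for n in reversed(shape):
        index.append(addr % n)
        addr //= n
    return tuple(reversed(index))

shape = (3, 4, 5)
print(linear_address((2, 1, 3), shape))   # 2*20 + 1*5 + 3 = 48
print(k_index(48, shape))                 # (2, 1, 3)
```

The fixed-shape formula bakes the dimensions into every address, which is exactly why appending along any but the slowest-varying dimension forces a file reorganization; the paper's axial vectors remove that dependency.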

  15. Phononic thermal resistance due to a finite periodic array of nano-scatterers

    Energy Technology Data Exchange (ETDEWEB)

    Trang Nghiêm, T. T.; Chapuis, Pierre-Olivier [Univ. Lyon, CNRS, INSA-Lyon, Université Claude Bernard Lyon 1, CETHIL UMR5008, F-69621 Villeurbanne (France)

    2016-07-28

    The wave property of phonons is employed to explore the thermal transport across a finite periodic array of nano-scatterers such as circular and triangular holes. As thermal phonons are generated in all directions, we study their transmission through a single array for both normal and oblique incidences, using a linear dispersionless time-dependent acoustic frame in a two-dimensional system. Roughness effects can be directly considered within the computations without relying on approximate analytical formulae. Analysis by spatio-temporal Fourier transform allows us to observe the diffraction effects and the conversion of polarization. Frequency-dependent energy transmission coefficients are computed for symmetric and asymmetric objects that are both subject to reciprocity. We demonstrate that the phononic array acts as an efficient thermal barrier by applying the theory of thermal boundary (Kapitza) resistances to arrays of smooth scattering holes in silicon for an exemplifying periodicity of 10 nm in the 5–100 K temperature range. It is observed that the associated thermal conductance has the same temperature dependence as that without phononic filtering.

  16. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on multiprocessor systems. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable with a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
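
The EM (MLEM) update that the paper parallelizes has a compact serial form: forward-project the current image, take the ratio of the measured data to the forward projection, back-project the ratio, and scale the image. The random system matrix and sizes below are illustrative toys, not a real scanner geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_pix = 40, 25
A = rng.random((n_bins, n_pix))       # stand-in probability (system) matrix
y = A @ rng.random(n_pix)             # noiseless projection data of a toy image

x = np.ones(n_pix)                    # uniform, non-negative initial estimate
sens = A.sum(axis=0)                  # sensitivity image (back-projection of ones)
err0 = np.abs(A @ x - y).max()        # initial data-fit error
for _ in range(50):
    ratio = y / (A @ x)               # measured over forward-projected
    x *= (A.T @ ratio) / sens         # multiplicative MLEM update
print(np.abs(A @ x - y).max() < err0) # fit improves over the initial guess
```

The forward projection `A @ x` and back-projection `A.T @ ratio` dominate the cost, and both partition naturally across rows of `A`, which is what makes the linear-array (and other multiprocessor) mappings effective.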

  17. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    Science.gov (United States)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role for non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.

  18. Computationally assisted screening and design of cell-interactive peptides by a cell-based assay using peptide arrays and a fuzzy neural network algorithm.

    Science.gov (United States)

    Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki

    2008-03-01

    We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced from even limited data by use of a model derived from computational analysis that serves as a guide to screening and combining the model with subsequent repeated experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay of a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold that of the basal array support was significantly higher (31%) than with the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.
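
A minimal sketch of the screen-restrict-rescreen loop described above, with an entirely hypothetical "assay" and rule. In the paper the rule is extracted by a fuzzy neural network from assay data; here, purely for illustration, the rule is hard-coded as banning two arbitrarily chosen residues (D and E):

```python
import random
random.seed(0)

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def score(pep):
    """Stand-in for the cell-adhesion assay (hypothetical): a peptide
    'adheres' fully when it avoids the unfavoured residues below."""
    return sum(aa not in "DE" for aa in pep) / len(pep)

def random_library(n, k=5):
    """Random 5-mer library, as in round 1 of the screen."""
    return ["".join(random.choice(AMINO) for _ in range(k)) for _ in range(n)]

def restrict(library, banned="DE"):
    """The extracted rule: drop sequences carrying unfavoured residues
    (the FNN derives this from data; here it is asserted)."""
    return [p for p in library if not any(aa in banned for aa in p)]

lib1 = random_library(300)                       # unrestricted round
hits1 = [p for p in lib1 if score(p) == 1.0]
lib2 = restrict(random_library(300))             # rule-restricted round
hits2 = [p for p in lib2 if score(p) == 1.0]
yield1 = len(hits1) / len(lib1)
yield2 = len(hits2) / (len(lib2) or 1)
```

The restricted round's hit yield exceeds the unrestricted one by construction, mirroring the 20% to 31% to 84% progression reported.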

  19. Inventory of miscellaneous streams

    International Nuclear Information System (INIS)

    Lueck, K.J.

    1995-09-01

    On December 23, 1991, the US Department of Energy, Richland Operations Office (RL) and the Washington State Department of Ecology (Ecology) agreed to adhere to the provisions of the Department of Ecology Consent Order. The Consent Order lists the regulatory milestones for liquid effluent streams at the Hanford Site to comply with the permitting requirements of the Washington Administrative Code. The RL provided the US Congress a Plan and Schedule to discontinue disposal of contaminated liquid effluent into the soil column on the Hanford Site. The plan and schedule document contained a strategy for the implementation of alternative treatment and disposal systems. This strategy included prioritizing the streams into two phases. The Phase 1 streams were considered to be higher priority than the Phase 2 streams. The actions recommended for the Phase 1 and 2 streams in the two reports were incorporated in the Hanford Federal Facility Agreement and Consent Order. Miscellaneous streams are those liquid effluent streams identified within the Consent Order that are discharged to the ground but are not categorized as Phase 1 or Phase 2 streams. This document consists of an inventory of the liquid effluent streams being discharged into the Hanford soil column.

  20. Progressive Conversion from B-rep to BSP for Streaming Geometric Modeling.

    Science.gov (United States)

    Bajaj, Chandrajit; Paoluzzi, Alberto; Scorzelli, Giorgio

    2006-01-01

    We introduce a novel progressive approach to generating a Binary Space Partition (BSP) tree and a convex cell decomposition for any input triangle-based boundary representation (B-rep), by utilizing a fast calculation of the surface inertia. We also generate a solid model at progressive levels of detail. This approach relies on a variation of standard BSP tree generation that allows cells to be labeled as in, out, and fuzzy, and permits a comprehensive representation of a solid as the Hasse diagram of a cell complex. Our new algorithm is embedded in a streaming computational framework, using four types of dataflow processes that continuously produce, transform, combine, or consume subsets of cells depending on their number of input/output streams. A varied collection of geometric modeling techniques is integrated in this streaming framework, including polygonal, spline, solid and heterogeneous modeling with boundary and decompositive representations, Boolean set operations, Cartesian products, and adaptive refinement. The real-time B-rep to BSP streaming results we report in this paper are a large step forward in the ultimate unification of rapid conceptual and detailed shape design methodologies.

  1. Experimental and numerical study of a flapping tidal stream generator

    Science.gov (United States)

    Kim, Jihoon; Le, Tuyen Quang; Ko, Jin Hwan; Sitorus, Patar Ebenezer; Tambunan, Indra Hartarto; Kang, Taesam

    2017-11-01

    The tidal stream turbine is one of the systems that extract kinetic energy from a tidal stream, and there are several types of tidal stream turbine depending on their operating motion. In this research, we conduct experimental and subsequent numerical analyses of a flapping tidal stream generator with a dual-flapper configuration. An experimental analysis of a small-scale prototype is conducted in a towing tank, and a numerical analysis is conducted using two-dimensional computational fluid dynamics simulations with an in-house code. Through experimental analysis conducted while varying the applied load and the input arm angle, a high applied load and a high input arm angle were found to be advantageous. In the subsequent numerical investigations with the kinematics selected from the experiments, it was found that a rear-swing flapper contributes more to the total power than a front-swing flapper when the two are separated by twice the chord length with a 90-degree phase difference between them. This research was part of the project titled `R&D center for underwater construction robotics', funded by the Ministry of Oceans and Fisheries (MOF), the Korea Institute of Marine Science & Technology Promotion (KIMST, PJT200539), and Pohang City in Korea.

  2. Time-Based Data Streams: Fundamental Concepts for a Data Resource for Streams

    Energy Technology Data Exchange (ETDEWEB)

    Beth A. Plale

    2009-10-10

    Real-time data, which we call data streams, are readings from instruments, environmental, bodily or building sensors that are generated at regular intervals and often, due to their volume, need to be processed in real time. Often a single pass is all that can be made on the data, and a decision to discard or keep the instance is made on the spot. Moreover, the stream is for all practical purposes indefinite, so decisions must be made on incomplete knowledge. This notion of data streams has a different set of issues from a file, for instance, that is byte-streamed to a reader. The file is finite, so the byte stream becomes a processing convenience more than a fundamentally different kind of data. Through the duration of the project we examined three aspects of streaming data: first, techniques to handle streaming data in a distributed system organized as a collection of web services; second, the notion of the dashboard and real-time controllable analysis constructs in the context of the Fermi Tevatron Beam Position Monitor; and third, provenance collection for stream processing, such as might occur as raw observational data flows from the source and undergoes correction, cleaning, and quality control. The impact of this work is severalfold. We were one of the first to advocate that streams had little value unless aggregated, and that notion is now gaining general acceptance. We were also one of the first groups to grapple with the notion of provenance of stream data.
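
The single-pass, decide-on-the-spot processing described above can be sketched with a running-statistics filter; Welford's online method supplies the running mean and variance, and the keep/discard rule is an illustrative assumption, not one from the report:

```python
class StreamStats:
    """Single-pass running mean/variance (Welford's online algorithm);
    each reading is seen once and never stored."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def var(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = StreamStats()
kept = []
for x in [10.0, 10.2, 9.9, 50.0, 10.1]:
    # decision made on the spot, on incomplete knowledge:
    # discard readings far from the running mean once we have history
    if stats.n >= 3 and abs(x - stats.mean) > 5 * (stats.var ** 0.5 + 1e-9):
        continue          # outlier dropped, never stored
    stats.update(x)
    kept.append(x)
```

Here the reading 50.0 is rejected the moment it arrives, using only the summary statistics accumulated so far.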

  3. Studies on coaxial circular array for underwater transducer applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    of the coaxial array from the next stage of investigation during which a hybrid formulation is developed to provide a computationally efficient method of calculating impedance. Different sidelobe suppression techniques including uniform and nonuniform excitations...

  4. Asteroid/meteorite streams

    Science.gov (United States)

    Drummond, J.

    The independent discovery of the same three streams (named alpha, beta, and gamma) among 139 Earth-approaching asteroids and among 89 meteorite-producing fireballs presents the possibility of matching specific meteorites to specific asteroids, or at least to asteroids in the same stream and, therefore, presumably of the same composition. Although perhaps of limited practical value, the three meteorites with known orbits are all ordinary chondrites. To identify, in general, the taxonomic type of the parent asteroid, however, would be of great scientific interest since these most abundant meteorite types cannot be unambiguously spectrally matched to an asteroid type. The H5 Pribram meteorite and asteroid 4486 (unclassified) are not part of a stream, but travel in fairly similar orbits. The LL5 Innisfree meteorite is orbitally similar to asteroid 1989DA (unclassified), and both are members of a fourth stream (delta) defined by five meteorite-dropping fireballs and this one asteroid. The H5 Lost City meteorite is orbitally similar to 1980AA (S type), which is a member of stream gamma defined by four asteroids and four fireballs. Another asteroid in this stream is classified as an S type, another is QU, and the fourth is unclassified. This stream suggests that ordinary chondrites should be associated with S (and/or Q) asteroids. Two of the known four V type asteroids belong to another stream, beta, defined by five asteroids and four meteorite-dropping (but unrecovered) fireballs, making it the most probable source of the eucrites. The final stream, alpha, defined by five asteroids and three fireballs, is of unknown composition since no meteorites have been recovered and only one asteroid has an ambiguous classification of QRS. If this stream, or any other as yet undiscovered ones, were found to be composed of a more practical material (e.g., water- or metal-rich), then recovery of the associated meteorites would provide an opportunity for in-hand analysis of a potential

  5. Reprogrammable logic in memristive crossbar for in-memory computing

    Science.gov (United States)

    Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui

    2017-12-01

    Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to exhibit the feasibility of our method to accomplish complex computing tasks. In addition, some critical logic-related performances were further discussed, such as the flexibility of data processing, cascading problem and bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architecture.
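
The IMP-based NAND mentioned above can be illustrated with a small Boolean simulation. The function below stands in for the stateful memristor operation, whose target cell is overwritten with the implication result (device physics and the crossbar addressing are omitted; this is the standard two-step IMP construction of NAND, not this paper's specific cell design):

```python
def IMP(p, q):
    """Material implication: the target state q is replaced by
    (NOT p) OR q; the source state p is left unchanged."""
    return (not p) or q

def nand_via_imp(p, q):
    """NAND from stateful IMP plus one cleared work cell:
    s <- FALSE
    s <- q IMP s    (s becomes NOT q)
    s <- p IMP s    (s becomes NOT p OR NOT q = NAND(p, q))."""
    s = False        # FALSE operation: initialize the work memristor
    s = IMP(q, s)    # s = NOT q
    s = IMP(p, s)    # s = NAND(p, q)
    return s

table = {(p, q): nand_via_imp(p, q)
         for p in (False, True) for q in (False, True)}
```

Since NAND is functionally complete, cascading this primitive suffices for arbitrary Boolean logic, which is why a small reprogrammable cell can execute any gate.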

  6. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to

  7. Control characteristics of cryogenic distillation column with a feedback stream for fusion reactor

    International Nuclear Information System (INIS)

    Yamanishi, Toshihiko; Okuno, Kenji

    1997-01-01

    The control characteristics of the cryogenic distillation column with a feedback stream have been discussed based on computer simulation results. This column plays an important role in a fusion reactor. A new control system was proposed from the simulation results. The flow rate of the top product is determined from the composition and flow rate of the main feed stream by a feedforward control loop. The flow rates of the feedback stream and the vapor stream within the column are proportionally changed with a corresponding change of feed flow rate. The flow rate of the vapor stream within the column is further adjusted to maintain product purity by a feedback control loop. The proposed system can control the product purity under a large fluctuation of feed composition, a change of feed flow rate, and an increase or decrease of the number of total theoretical stages of the column. The control system should be designed for each column by considering its operating conditions and function. The present study gives us a basic procedure for the design of the control system of the cryogenic distillation column. (author)
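
A rough sketch of the control structure described above: feedforward sets the top-product draw from the main feed, the internal flows scale proportionally with the feed, and a PI feedback loop trims the vapor flow to hold purity. All gains and scaling factors are illustrative assumptions, not values from the paper:

```python
def control_step(feed_flow, feed_comp, purity_meas, purity_set,
                 integ, kp=0.5, ki=0.1, dt=1.0):
    """One control step (hypothetical gains).
    Returns top-product, vapor and feedback-stream flows plus the
    updated integral state of the PI purity loop."""
    # feedforward: top-product draw follows light-component inflow
    top_product = feed_flow * feed_comp
    # feedback-stream and vapor flows scale proportionally with feed
    vapor_base = 3.0 * feed_flow
    feedback_stream = 0.5 * feed_flow
    # feedback: PI trim on the vapor flow from the purity error
    err = purity_set - purity_meas
    integ += err * dt
    vapor = vapor_base + kp * err + ki * integ
    return top_product, vapor, feedback_stream, integ

# purity slightly below setpoint -> vapor flow is nudged upward
top, vap, fb, integ = control_step(100.0, 0.6, 0.985, 0.99, integ=0.0)
```

In a real design the gains would be tuned to the column's operating conditions, as the abstract notes.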

  8. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    Science.gov (United States)

    Ying Wah, Teh

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams. They have the ability to detect arbitrary-shape clusters and to handle outliers, and they do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time applications of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets. PMID:25110753
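
A minimal single-pass micro-cluster sketch in the spirit of density-based stream clustering: each cluster is summarized by a count, linear sum, and squared sum, so points are absorbed without being stored. The radius threshold and absorb rule are illustrative; the paper's algorithm is more sophisticated:

```python
import math

class MicroCluster:
    """Cluster-feature summary: count, linear sum, squared sum."""
    def __init__(self, point):
        self.n = 1
        self.ls = list(point)
        self.ss = [x * x for x in point]
    def center(self):
        return [s / self.n for s in self.ls]
    def absorb(self, point):
        self.n += 1
        for i, x in enumerate(point):
            self.ls[i] += x
            self.ss[i] += x * x

def process(stream, radius=1.0):
    """Single pass over the stream: absorb each point into the nearest
    micro-cluster within `radius`, otherwise open a new one."""
    clusters = []
    for p in stream:
        best, dist = None, float("inf")
        for mc in clusters:
            d = math.dist(p, mc.center())
            if d < dist:
                best, dist = mc, d
        if best is not None and dist <= radius:
            best.absorb(p)
        else:
            clusters.append(MicroCluster(p))
    return clusters

cls = process([(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9), (0.1, -0.1)])
```

The summaries keep per-point cost constant, which is the property that makes density-based clustering feasible under real-time IoT constraints.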

  9. Jet arrays in supersonic crossflow — An experimental study

    Science.gov (United States)

    Ali, Mohd Yousuf; Alvi, Farrukh

    2015-12-01

    Jet injection into a supersonic crossflow is a classical fluid dynamics problem with many engineering applications. Several experimental and numerical studies have been taken up to analyze the interaction of a single jet with the incoming crossflow. However, there is a dearth of literature on the interaction of multiple jets with one another and with the crossflow. Jets in a supersonic crossflow are known to produce a three-dimensional bow-shock structure due to the blockage of the flow. Multiple jets in a streamwise linear array interact with both one another and the incoming supersonic flow. In this paper, a parametric study is carried out to analyze the effect of microjet (sub-mm diameter) injection in a Mach 1.5 supersonic crossflow using flow visualization and velocity field measurements. The variation of the microjet orifice diameter and spacing within an array is used to study the three-dimensional nature of the flow field around the jets. The strength of the microjet-generated shock, scaling of the shock wave angle with the momentum coefficient, averaged streamwise, spanwise, and cross-stream velocity fields, and microjet array trajectories are detailed in the paper. It was found that the shock angles of the microjet-generated shocks scale with the momentum coefficient for the three actuator configurations tested. As the microjets issue into the crossflow, longitudinal counter-rotating vortex pairs (CVPs) are formed. The vortex pairs remain coherent for arrays with larger spanwise spacing between the micro-orifices and exhibit significant three-dimensionality similar to that of a single jet in crossflow. As the spacing between the jets is reduced, the CVPs merge, resulting in a more two-dimensional flow field. The bow shock resulting from microjet injection also becomes nearly two-dimensional as the spacing between the micro-orifices is reduced. Trajectory estimations yield that microjets in an array have similar penetration as single jets. A notional

  10. Aeroacoustics of Three-Stream Jets

    Science.gov (United States)

    Henderson, Brenda S.

    2012-01-01

    Results from acoustic measurements of noise radiated from a heated, three-stream, co-annular exhaust system operated at subsonic conditions are presented. The experiments were conducted for a range of core, bypass, and tertiary stream temperatures and pressures. The nozzle system had a fan-to-core area ratio of 2.92 and a tertiary-to-core area ratio of 0.96. The impact of introducing a third stream on the radiated noise for third-stream velocities below that of the bypass stream was to reduce high frequency noise levels at broadside and peak jet-noise angles. Mid-frequency noise radiation at aft observation angles was impacted by the conditions of the third stream. The core velocity had the greatest impact on peak noise levels and the bypass-to-core mass flow ratio had a slight impact on levels in the peak jet-noise direction. The third-stream jet conditions had no impact on peak noise levels. Introduction of a third jet stream in the presence of a simulated forward-flight stream limits the impact of the third stream on radiated noise. For equivalent ideal thrust conditions, two-stream and three-stream jets can produce similar acoustic spectra although high-frequency noise levels tend to be lower for the three-stream jet.

  11. Academic Self-Concepts in Ability Streams: Considering Domain Specificity and Same-Stream Peers

    Science.gov (United States)

    Liem, Gregory Arief D.; McInerney, Dennis M.; Yeung, Alexander S.

    2015-01-01

    The study examined the relations between academic achievement and self-concepts in a sample of 1,067 seventh-grade students from 3 core ability streams in Singapore secondary education. Although between-stream differences in achievement were large, between-stream differences in academic self-concepts were negligible. Within each stream, levels of…

  12. Solar wind stream interfaces

    International Nuclear Information System (INIS)

    Gosling, J.T.; Asbridge, J.R.; Bame, S.J.; Feldman, W.C.

    1978-01-01

    Measurements aboard Imp 6, 7, and 8 reveal that approximately one third of all high-speed solar wind streams observed at 1 AU contain a sharp boundary (of thickness less than approx. 4 × 10^4 km) near their leading edge, called a stream interface, which separates plasma of distinctly different properties and origins. Identified as discontinuities across which the density drops abruptly, the proton temperature increases abruptly, and the speed rises, stream interfaces are remarkably similar in character from one stream to the next. A superposed epoch analysis of plasma data has been performed for 23 discontinuous stream interfaces observed during the interval March 1971 through August 1974. Among the results of this analysis are the following: (1) a stream interface separates what was originally thick (i.e., dense) slow gas from what was originally thin (i.e., rare) fast gas; (2) the interface is the site of a discontinuous shear in the solar wind flow in a frame of reference corotating with the sun; (3) stream interfaces occur at speeds less than 450 km s^-1 and close to or at the maximum of the pressure ridge at the leading edges of high-speed streams; (4) a discontinuous rise by approx. 40% in electron temperature occurs at the interface; and (5) discontinuous changes (usually rises) in alpha particle abundance and flow speed relative to the protons occur at the interface. Stream interfaces do not generally recur on successive solar rotations, even though the streams in which they are embedded often do. At distances beyond several astronomical units, stream interfaces should be bounded by forward-reverse shock pairs; three of four reverse shocks observed at 1 AU during 1971-1974 were preceded within approx. 1 day by stream interfaces. Our observations suggest that many streams close to the sun are bounded on all sides by large radial velocity shears separating rapidly expanding plasma from more slowly expanding plasma.

  13. Coulomb gap triptych in a periodic array of metal nanocrystals.

    Science.gov (United States)

    Chen, Tianran; Skinner, Brian; Shklovskii, B I

    2012-09-21

    The Coulomb gap in the single-particle density of states (DOS) is a universal consequence of electron-electron interaction in disordered systems with localized electron states. Here we show that in arrays of monodisperse metallic nanocrystals, there is not one but three identical adjacent Coulomb gaps, which together form a structure that we call a "Coulomb gap triptych." We calculate the DOS and the conductivity in two- and three-dimensional arrays using a computer simulation. Unlike in the conventional Coulomb glass models, in nanocrystal arrays the DOS has a fixed width in the limit of large disorder. The Coulomb gap triptych can be studied via tunneling experiments.

  14. Grid refinement model in lattice Boltzmann method for stream function-vorticity formulations

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Myung Seob [Dept. of Mechanical Engineering, Dongyang Mirae University, Seoul (Korea, Republic of)

    2015-03-15

    In this study, we present a grid refinement model in the lattice Boltzmann method (LBM) for two-dimensional incompressible fluid flow. The model combines the desirable features of the lattice Boltzmann method and stream function-vorticity formulations. In order to obtain an accurate result, a very fine grid (or lattice) is required near the solid boundary. Therefore, the grid refinement model is used in the lattice Boltzmann method for the stream function-vorticity formulation. This approach is more efficient in that it can obtain a solution as accurate as that of the single-block approach even when far fewer lattice nodes are used for the computation. In order to validate the grid refinement approach for the stream function-vorticity formulation, numerical simulations of lid-driven cavity flows were performed and good results were obtained.

  15. Stream systems.

    Science.gov (United States)

    Jack E. Williams; Gordon H. Reeves

    2006-01-01

    Restored, high-quality streams provide innumerable benefits to society. In the Pacific Northwest, high-quality stream habitat often is associated with an abundance of salmonid fishes such as chinook salmon (Oncorhynchus tshawytscha), coho salmon (O. kisutch), and steelhead (O. mykiss). Many other native...

  16. A fast, exact code for scattered thermal radiation compared with a two-stream approximation

    International Nuclear Information System (INIS)

    Cogley, A.C.; Pandey, D.K.

    1980-01-01

    A two-stream accuracy study for internally (thermal) driven problems is presented by comparison with a recently developed 'exact' adding/doubling method. The resulting errors in external (or boundary) radiative intensity and flux are usually larger than those for the externally driven problems and vary substantially with the radiative parameters. Error predictions for a specific problem are difficult. An unexpected result is that the exact method is computationally as fast as the two-stream approximation for nonisothermal media
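
For reference, one common form of the two-stream equations for an internally (thermally) driven, isotropically scattering medium is sketched below; conventions vary, and the paper's exact formulation may differ:

```latex
% I^{\pm} are the two stream intensities, \tau the optical depth,
% \omega the single-scattering albedo, B(\tau) the Planck source,
% and \mu_1 the stream cosine (e.g. 1/\sqrt{3} in one quadrature choice).
\mu_1 \frac{dI^{+}}{d\tau}
  = I^{+} - \frac{\omega}{2}\left(I^{+} + I^{-}\right) - (1-\omega)\,B(\tau),
\qquad
-\mu_1 \frac{dI^{-}}{d\tau}
  = I^{-} - \frac{\omega}{2}\left(I^{+} + I^{-}\right) - (1-\omega)\,B(\tau).
```

The approximation replaces the full angular integral of the transfer equation by these two coupled ordinary differential equations, which is what makes it so cheap relative to an exact adding/doubling solution.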

  17. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    The nested wire-array is a promising structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on the zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), optimization of the typical nested wire-array used as a load on the Z machine at Sandia Laboratory was carried out. It was shown that this load has been essentially optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer of drive current between the inner and outer arrays. The experimental and computed X-ray pulses were compared, and it was suggested that the assumption of wire-array collision is reasonable in nested wire-array Z-pinches, at least for the current level of the Z machine. (authors)

  18. Parametric analysis of ATM solar array.

    Science.gov (United States)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
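
The roles of the third and fourth programs (fitting polynomial equations for the solar cell characteristics versus temperature, then using those coefficients to generate parametric data) can be sketched as follows; the cell data below are entirely hypothetical:

```python
import numpy as np

# hypothetical measured short-circuit current of a cell vs temperature
temps = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # deg C
isc   = np.array([2.90, 3.00, 3.11, 3.21, 3.32])   # A

# third program's role: least-squares polynomial fit of the
# characteristic versus temperature (degree 1 suffices for this data)
coeffs = np.polyfit(temps, isc, 1)

# fourth program's role: evaluate the fitted polynomial to predict
# performance at conditions other than the test conditions
predicted = np.polyval(coeffs, 60.0)
```

The same pattern extends to other cell characteristics (open-circuit voltage, maximum-power point) and to higher polynomial degrees where the temperature dependence is nonlinear.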

  19. PROXY-BASED PATCHING STREAM TRANSMISSION STRATEGY IN MOBILE STREAMING MEDIA SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Liao Jianxin; Lei Zhengxiong; Ma Xutao; Zhu Xiaomin

    2006-01-01

    A mobile transmission strategy, PMPatching (Proxy-based Mobile Patching), is proposed; it applies to proxy-based mobile streaming media systems in Wideband Code Division Multiple Access (WCDMA) networks. Performance of the whole system can be improved by using a patching stream to transmit the anterior part of the suffix that has already been played back, and by batching all demands for the suffix that arrive during the prefix period and the patching-stream transmission threshold period. Experimental results show that this strategy can efficiently reduce the average network transmission cost and the number of channels consumed in the central streaming media server.
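
The core patching decision can be sketched as follows. This is the generic patching scheme rather than PMPatching's full proxy logic, and all parameter names are illustrative:

```python
def schedule(request_t, stream_start, threshold, video_len):
    """Patching decision (generic sketch): a request arriving within
    `threshold` of an ongoing full stream's start joins that stream
    and receives only a short patch covering the missed prefix;
    otherwise a new full-length stream is started."""
    offset = request_t - stream_start
    if 0 <= offset <= threshold:
        # join the multicast; the patch stream lasts `offset` units
        return ("patch", offset)
    return ("full", video_len)

# request 30 s after the stream start: short patch only
a = schedule(request_t=30, stream_start=0, threshold=60, video_len=3600)
# request past the threshold: a fresh full stream is opened
b = schedule(request_t=90, stream_start=0, threshold=60, video_len=3600)
```

Batching the suffix demands that arrive inside the threshold window, as the abstract describes, is what cuts the per-client channel count.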

  20. Design for real-time data acquisition based on streaming technology

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Kojima, Mamoru

    2001-04-01

    For the LHD project a long-pulse plasma experiment of one-hour duration is planned. In this quasi steady-state operation, the data acquisition system will be required to continuously transfer the diagnostic data from the digitizer front-end and display them in real time. The CompactPCI standard is used to replace the conventional CAMAC digitizers in LHD, because it provides good functionality for real-time data streaming and connectivity with modern PC technology. The digitizer scheme, the interface to the host computer, the adoption of data compression, and downstream applications are discussed in detail to design and implement this new real-time data streaming system for LHD plasma diagnostics. (author)

  1. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predict the detection performance of a real system. [Record contains only fragments of the report's reference list, including:] Nickel: Adaptive Beamforming for Phased Array Radars. Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906; ... strategies for airborne radar. Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331.

  2. X-ray source array

    International Nuclear Information System (INIS)

    Cooperstein, G.; Lanza, R.C.; Sohval, A.R.

    1983-01-01

    A circular array of cold cathode diode X-ray sources, for radiation imaging applications, such as computed tomography includes electrically conductive cathode plates each of which cooperates with at least two anodes to form at least two diode sources. In one arrangement, two annular cathodes are separated by radially extending, rod-like anodes. Field enhancement blades may be provided on the cathodes. In an alternative arrangement, the cathode plates extend radially and each pair is separated by an anode plate also extending radially. (author)

  3. Computer analysis to the geochemical interpretation of soil and stream sediment data in an area of Southern Uruguay

    International Nuclear Information System (INIS)

    Spangenberg, J.

    2010-01-01

    In southern Uruguay there are several known occurrences of base metal sulphide mineralization within an area of Precambrian volcanic sedimentary rocks. Regional geochemical stream sediment reconnaissance surveys revealed new polymetallic anomalies in the same stratigraphic zone. Geochemical interpretation of multi-element data from a soil and stream sediment survey carried out in one of these anomalous areas is presented.

  4. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers that comprise a head-mounted monitor and a video camera and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the smart-glasses model most commonly used for medical purposes, it is still commercially unavailable and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server directly via wireless internet, and the streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of the video images depended on the resolution and dynamic range of the video camera, the speed of the internet connection, and the wearer's attention to minimizing image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video on the head-mounted display as it was being shot. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  5. Depth Images Filtering In Distributed Streaming

    Directory of Open Access Journals (Sweden)

    Dziubich Tomasz

    2016-04-01

    Full Text Available In this paper, we propose a distributed system for processing point clouds and transferring them via computer network with regard to effectiveness-related requirements. We compare point cloud filters with a focus on their usage for streaming optimization. For the filtering step of the stream pipeline processing we evaluate four filters: Voxel Grid, Radius Outlier Removal, Statistical Outlier Removal and Pass Through. For each of the filters we perform a series of tests evaluating the impact on the point cloud size and the transmission frequency (analysed for various fps ratios). We present results of the optimization process used for point cloud consolidation in a distributed environment. We describe the processing of the point clouds before and after the transmission. Pre- and post-processing allow the user to send the cloud via network without any delays. The proposed pre-processing compression of the cloud and the post-processing reconstruction of it are focused on assuring that the end-user application obtains the cloud with a given precision.
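    The voxel-grid downsampling step named above is easy to sketch: snap each point to a voxel index and keep one centroid per occupied voxel. A minimal NumPy stand-in for a VoxelGrid-style filter (function name, voxel size, and data are illustrative, not from the paper):

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Downsample an (N, 3) point cloud: one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)          # normalize shape across NumPy versions
    centroids = np.zeros((len(counts), 3))
    np.add.at(centroids, inverse, points)  # sum the points falling in each voxel
    return centroids / counts[:, None]     # centroid = sum / count

# A dense random cloud in the unit cube collapses to at most 4**3 points.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(10_000, 3))
filtered = voxel_grid_filter(cloud, voxel_size=0.25)
```

    The same structure extends to the other filters in the comparison: each maps a cloud to a smaller cloud before it enters the transmission stage.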

  6. Radioactive contamination of fishes in lake and streams impacted by the Fukushima nuclear power plant accident

    Energy Technology Data Exchange (ETDEWEB)

    Yoshimura, Mayumi, E-mail: yoshi887@ffpri.affrc.go.jp [Kansai Research Center, Forestry and Forest Products Research Institute, Nagaikyuutaro 68, Momoyama, Fushimi, Kyoto 612-0855 (Japan); Yokoduka, Tetsuya [Tochigi Prefectural Fisheries Experimental Station, Sarado 2599, Ohtawara, Tochigi 324-0404 (Japan)

    2014-06-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident in March 2011 emitted radioactive substances into the environment, contaminating a wide array of organisms including fishes. We found higher concentrations of radioactive cesium ({sup 137}Cs) in brown trout (Salmo trutta) than in rainbow trout (Oncorhynchus mykiss), and {sup 137}Cs concentrations in brown trout were higher in a lake than in a stream. Our analyses indicated that these differences were primarily due to differences in diet, but that habitat also had an effect. Radiocesium concentrations ({sup 137}Cs) in stream charr (Salvelinus leucomaenis) were higher in regions with more concentrated aerial activity and in older fish. These results were also attributed to dietary and habitat differences. Preserving uncontaminated areas by remediating soils and releasing uncontaminated fish would help restore this popular fishing area but would require a significant effort, followed by a waiting period to allow activity concentrations to fall below the threshold limits for consumption. - Highlights: • Concentration of {sup 137}Cs in brown trout was higher than in rainbow trout. • {sup 137}Cs concentration of brown trout in a lake was higher than in a stream. • {sup 137}Cs concentration of stream charr was higher in region with higher aerial activity. • Concentration of {sup 137}Cs in stream charr was higher in older fish. • Difference of contamination among fishes was due to difference in diet and habitat.

  7. Evaluation of the streaming-matrix method for discrete-ordinates duct-streaming calculations

    International Nuclear Information System (INIS)

    Clark, B.A.; Urban, W.T.; Dudziak, D.J.

    1983-01-01

    A new deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) is applied to two realistic duct-shielding problems. The results are compared to standard discrete-ordinates and Monte Carlo calculations. The SMHM shows promise as an alternative deterministic streaming method to standard discrete-ordinates

  8. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each output tap directly with the desired user's code, thereby eliminating the synthesis filter. Although its configuration differs considerably from that of 2D RAKE receivers, the proposed scheme achieves roughly equivalent performance while imposing a lower computation load, owing to adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and to compare it with that of the standard 2D RAKE.
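    The despreading step this scheme builds on is the standard DS-CDMA correlation. A toy single-path, single-antenna sketch (code length, symbols, and interference model are made up for illustration, not taken from the paper): a BPSK symbol is spread over a ±1 chip sequence and recovered by correlating the received chips with the desired user's code.

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=31)      # desired user's +/-1 spreading code
symbols = np.array([1.0, -1.0, 1.0, 1.0])    # BPSK data symbols

# Spreading: each symbol is multiplied by the full chip sequence.
tx = (symbols[:, None] * code[None, :]).ravel()

# Channel: one interfering user with a different random code, plus noise.
other = rng.choice([-1.0, 1.0], size=31)
interferer = (rng.choice([-1.0, 1.0], size=4)[:, None] * other).ravel()
rx = tx + 0.5 * interferer + 0.1 * rng.standard_normal(tx.size)

# Despreading: correlate each chip block with the desired code and slice.
blocks = rx.reshape(-1, code.size)
decisions = np.sign(blocks @ code)
```

    The correlation gain (a factor equal to the code length) is what suppresses the interferer; the SBAA scheme applies this same operation per subband tap.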

  9. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computers. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
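    A minimal illustration of the embed-and-correlate idea behind robust watermarking (this is a generic spread-spectrum sketch, not the author's scheme; the function names and strength value are invented): a key-derived pseudorandom ±1 pattern is added to a frame, and presence is tested by correlating the frame against the same pattern.

```python
import numpy as np

def wm_pattern(shape, key):
    """Pseudorandom +/-1 pattern derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(frame, key, strength=5.0):
    """Additively embed the key's pattern into a grayscale frame."""
    return frame + strength * wm_pattern(frame.shape, key)

def detect(frame, key):
    """Correlation detector: a large positive value suggests the mark is present."""
    p = wm_pattern(frame.shape, key)
    return float(np.mean((frame - frame.mean()) * p))

rng = np.random.default_rng(7)
frame = rng.uniform(0.0, 255.0, size=(128, 128))
marked = embed(frame, key=42)
```

    Because the mark is spread over every pixel, mild distortions degrade the correlation only gradually, which is the intuition behind robustness.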

  10. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
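    The scan-and-select step described for the I–V curve algorithm reduces to a small search once noise has been measured at each offset voltage. A purely illustrative sketch (the voltages and the toy noise curve are invented; the real system measures electronic noise at each bias point):

```python
import numpy as np

def optimal_bias(voltages, noise):
    """Return the offset voltage at which the measured noise is minimal."""
    voltages = np.asarray(voltages, dtype=float)
    noise = np.asarray(noise, dtype=float)
    return voltages[np.argmin(noise)]

# Hypothetical scan: a noise-vs-bias curve with a dip near 0.2 mV.
v_scan = np.linspace(0.0, 0.5, 51)      # offset voltages, mV
n_scan = (v_scan - 0.2) ** 2 + 0.01     # toy measured noise at each bias
v_opt = optimal_bias(v_scan, n_scan)
```

    In practice such a scan would be repeated per channel, with the preamplifier setting each STJ to its own selected operating point.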

  11. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  12. Symposium on the Nature of Science—Streaming Video Archive

    Science.gov (United States)

    Partial session listing recovered from the streaming-video archive index (several speaker names and titles are garbled in the source): Oddone – Welcome; Mark Ratner – Nano 201: A Gentle Introduction to Nanotechnology and Nanoscience; Marsha … Physical Sciences; Neil Kelleher – How a Chemist Needs Computer Science, Biology, and Engineering to Push …; … – Incorporating Nanotechnology into the Curriculum (streamed session not available); Rich Marvin – Using …

  13. Consequences of variation in stream-landscape connections for stream nitrate retention and export

    Science.gov (United States)

    Handler, A. M.; Helton, A. M.; Grimm, N. B.

    2017-12-01

    Hydrologic and material connections among streams, the surrounding terrestrial landscape, and groundwater systems fluctuate between extremes in dryland watersheds, yet the consequences of this variation for stream nutrient retention and export remain uncertain. We explored how seasonal variation in hydrologic connection among streams, landscapes, and groundwater affects nitrate and ammonium concentrations across a dryland stream network, and how this variation mediates in-stream nitrate uptake and watershed export. We conducted spatial surveys of stream nitrate and ammonium concentration across the 1200 km² Oak Creek watershed in central Arizona (USA). In addition, we conducted pulse releases of a solution containing biologically reactive sodium nitrate, with sodium chloride as a conservative hydrologic tracer, to estimate nitrate uptake rates in the mainstem (Q>1000 L/s) and two tributaries. Nitrate and ammonium concentrations generally increased from headwaters to mouth in the mainstem. Locally elevated concentrations occurred in spring-fed tributaries draining fish hatcheries and larger irrigation ditches, but did not have a substantial effect on the mainstem nitrogen load. Ambient nitrate concentration (as N) ranged from below the analytical detection limit of 0.005 mg/L to 0.43 mg/L across all uptake experiments. Uptake length, the average stream distance traveled by a nutrient atom from the point of release to its uptake, at ambient concentration ranged from 250 to 704 m and increased significantly with higher discharge, both across streams and within the same stream on different experiment dates. Vertical uptake velocity and areal uptake rate ranged from 6.6 to 10.6 mm min⁻¹ and from 0.03 to 1.4 mg N m⁻² min⁻¹, respectively. Preliminary analyses indicate potentially elevated nitrogen loading to the lower portion of the watershed during seasonal precipitation events, but overall, the capacity for nitrate uptake is high in the mainstem and tributaries. Ongoing work
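    The three reported metrics are linked by the standard nutrient-spiraling relations: uptake velocity v_f = Q/(w·S_w) and areal uptake U = v_f·C. A worked example with illustrative numbers (not the study's data):

```python
# Nutrient-spiraling bookkeeping; all values are illustrative, not from the study.
Q = 1000.0    # discharge, L/s
w = 8.0       # wetted width, m
Sw = 500.0    # uptake length, m
C = 0.10      # ambient nitrate concentration, mg N/L

vf = Q / (w * Sw)         # uptake velocity, L m^-2 s^-1 (numerically mm/s)
U = vf * C                # areal uptake rate, mg N m^-2 s^-1
vf_mm_per_min = vf * 60.0 # convert to the mm/min units used in the abstract
U_per_min = U * 60.0      # mg N m^-2 min^-1
```

    Because 1 L spread over 1 m² is a 1 mm layer, v_f in L m⁻² s⁻¹ reads directly as mm/s, which is why uptake velocities are quoted in mm min⁻¹.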

  14. ISP: an optimal out-of-core image-set processing streaming architecture for parallel heterogeneous systems.

    Science.gov (United States)

    Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang

    2012-06-01

    Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory, demands that are compounded by the large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems to address this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems, both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
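    The latency-hiding idea, keeping a few chunks in flight so computation on one chunk overlaps the load of the next, can be sketched with a bounded queue and a producer thread. This is a generic double-buffering sketch, not ISP's actual pipeline; names and the chunk source are invented:

```python
import threading
import queue
import numpy as np

def stream_process(chunks, fn, depth=2):
    """Apply fn to each chunk while up to `depth` further chunks load in background."""
    q = queue.Queue(maxsize=depth)
    SENTINEL = object()

    def producer():
        for c in chunks:            # stands in for disk/network reads
            q.put(c)                # blocks when `depth` chunks are already queued
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while (c := q.get()) is not SENTINEL:
        results.append(fn(c))       # compute overlaps the subsequent loads
    return results

# Five small "volumes" processed out-of-core style.
sums = stream_process((np.full(4, i) for i in range(5)), fn=np.sum)
```

    When fn is expensive enough to cover the load time, total runtime approaches that of an in-core run, which is the effect the abstract describes.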

  15. Real-time WAMI streaming target tracking in fog

    Science.gov (United States)

    Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe

    2016-05-01

    Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and Text data is highly desired for many mission-critical emergency or security applications. Cloud Computing has been considered promising for big data integration from multi-modal sources. In many mission-critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerance because the servers are allocated far from the sensing platform; indeed, there may be no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement to Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.

  16. Streaming tearing mode

    Science.gov (United States)

    Shigeta, M.; Sato, T.; Dasgupta, B.

    1985-01-01

    The magnetohydrodynamic stability of streaming tearing mode is investigated numerically. A bulk plasma flow parallel to the antiparallel magnetic field lines and localized in the neutral sheet excites a streaming tearing mode more strongly than the usual tearing mode, particularly for the wavelength of the order of the neutral sheet width (or smaller), which is stable for the usual tearing mode. Interestingly, examination of the eigenfunctions of the velocity perturbation and the magnetic field perturbation indicates that the streaming tearing mode carries more energy in terms of the kinetic energy rather than the magnetic energy. This suggests that the streaming tearing mode instability can be a more feasible mechanism of plasma acceleration than the usual tearing mode instability.

  17. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Directory of Open Access Journals (Sweden)

    Saiful Nurdin Dayana

    2018-01-01

    Full Text Available This paper presents a high performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented in a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
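    The affine-gap recurrence that each SA cell implements can be written out in software form. A reference sketch using Gotoh's three-matrix formulation (the score values are illustrative defaults, not the paper's parameters): H tracks alignments ending in a match/mismatch, while E and F track alignments ending in a gap in either sequence.

```python
import numpy as np

def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-2, gap_extend=-1):
    """Affine-gap local alignment score (Gotoh's formulation of Smith-Waterman)."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))              # best score ending at (i, j)
    E = np.full((n + 1, m + 1), -np.inf)      # ending with a gap in b (horizontal)
    F = np.full((n + 1, m + 1), -np.inf)      # ending with a gap in a (vertical)
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
            F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best
```

    The systolic array exploits the fact that all cells on one anti-diagonal depend only on earlier anti-diagonals, so they can be computed in parallel, which is what collapses the quadratic time to linear.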

  18. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally, the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
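    The shift-phase-and-sum step can be sketched compactly: each hole contributes the reference field translated by its offset and multiplied by a phase factor, and the detector reads the squared magnitude of the coherent sum. This toy sketch (using np.roll as a stand-in for the translation, with invented offsets and phases) illustrates only the bookkeeping, not the paper's full per-direction, per-wavelength pipeline:

```python
import numpy as np

def aperture_pattern(reference, hole_offsets, phases):
    """Coherently sum translated, phase-shifted copies of a reference field,
    then square to obtain the power pattern on the focal plane."""
    total = np.zeros_like(reference)
    for (dy, dx), phi in zip(hole_offsets, phases):
        total += np.exp(1j * phi) * np.roll(reference, (dy, dx), axis=(0, 1))
    return np.abs(total) ** 2

# Toy reference field from a single central hole (random phase screen).
rng = np.random.default_rng(0)
ref = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(32, 32)))
power = aperture_pattern(ref, hole_offsets=[(0, 0), (3, 1), (-2, 4)],
                         phases=[0.0, 0.4, 1.1])
```

    Because only one reference field per direction and wavelength is stored, memory scales with the reference pattern rather than with the number of holes, which is the point of the method.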

  19. The distribution of copper in stream sediments in an anomalous stream near Steinkopf, Namaqualand

    International Nuclear Information System (INIS)

    De Bruin, D.

    1987-01-01

    Anomalous copper concentrations detected by the regional stream-sediment programme of the Geological Survey were investigated in a stream near Steinkopf, Namaqualand. A follow-up disclosed the presence of malachite mineralization. However, additional stream-sediment samples collected from the 'anomalous' stream revealed an erratic distribution of copper, and also that the malachite mineralization had no direct effect on the copper distribution in the stream sediments. Low partial-extraction yields, together with X-ray diffraction analyses, indicated that dispersion is mainly mechanical and that the copper occurs as cations in the lattice of the biotite fraction of the stream sediments. (author). 8 refs., 5 figs., 1 tab

  1. Affective three-dimensional brain-computer interface created using a prism array-based display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs) because involuntarily motivated selective attention by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counter-balanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly reduced in the prism condition compared with the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.

  2. Volumetric real-time imaging using a CMUT ring array.

    Science.gov (United States)

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N; O'Donnell, Matthew; Sahn, David J; Khuri-Yakub, Butrus T

    2012-06-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods--flash, classic phased array (CPA), and synthetic phased array (SPA)--were used in the study. For SPA imaging, two techniques to improve the image quality--Hadamard coding and aperture weighting--were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming.
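    The Hadamard-coded synthetic aperture idea can be sketched in miniature: each transmit event fires all elements with weights from one row of a Hadamard matrix, and because the matrix is orthogonal the per-element responses are recovered by multiplying with its transpose, with an SNR gain over firing one element at a time. The sizes, noise level, and data below are invented for illustration:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Per-element echo signals we want to recover (4 elements, 8 time samples each).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))

# Coded acquisition: event k fires all elements with weights H[k, :].
H = hadamard(4)
recordings = H @ x + 0.01 * rng.standard_normal((4, 8))

# Decoding: H @ H.T = n * I, so x is recovered as H.T @ recordings / n.
decoded = H.T @ recordings / 4
```

    Averaging over all coded events is what raises the SNR relative to classic one-element-per-event synthetic aperture imaging, at no cost in acquisition rate.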

  3. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Directory of Open Access Journals (Sweden)

    Ion Stiharu

    2010-08-01

    Full Text Available Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node.

  4. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    Science.gov (United States)

    Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602

  5. Timed arrays wideband and time varying antenna arrays

    CERN Document Server

    Haupt, Randy L

    2015-01-01

    Introduces timed arrays and design approaches to meet the new high performance standards The author concentrates on any aspect of an antenna array that must be viewed from a time perspective. The first chapters briefly introduce antenna arrays and explain the difference between phased and timed arrays. Since timed arrays are designed for realistic time-varying signals and scenarios, the book also reviews wideband signals, baseband and passband RF signals, polarization and signal bandwidth. Other topics covered include time domain, mutual coupling, wideband elements, and dispersion. The auth

  6. A morphological comparison of narrow, low-gradient streams traversing wetland environments to alluvial streams.

    Science.gov (United States)

    Jurmu, Michael C

    2002-12-01

    Twelve morphological features from research on alluvial streams are compared in four narrow, low-gradient wetland streams located in different geographic regions (Connecticut, Indiana, and Wisconsin, USA). All four reaches differed in morphological characteristics in five of the features compared (consistent bend width, bend cross-sectional shape, riffle width compared to pool width, greatest width directly downstream of riffles, and thalweg location), while three reaches differed in two comparisons (mean radius of curvature to width ratio and axial wavelength to width ratio). The remaining five features compared had at least one reach where different characteristics existed. This indicates the possibility of varying morphology for streams traversing wetland areas, further supporting the concept that the unique qualities of wetland environments might also influence the controls on fluvial dynamics and the development of streams. If certain morphological features found in streams traversing wetland areas differ from current fluvial principles, then these varying features should be incorporated into future wetland stream design and creation projects. The results warrant further research on other streams traversing wetlands to determine whether streams in these environments contain unique morphology, and further investigation of the impact of low-energy fluvial processes on morphological development. Possible explanations for the morphology deviations in the study streams and some suggestions for stream design in wetland areas based upon the results and field observations are also presented.

  7. Voltage splay modes and enhanced phase locking in a modified linear Josephson array

    Science.gov (United States)

    Harris, E. B.; Garland, J. C.

    1997-02-01

    We analyze a modified linear Josephson-junction array in which additional unbiased junctions are used to greatly enhance phase locking. This geometry exhibits strong correlated behavior, with an external magnetic field tuning the voltage splay angle between adjacent Josephson oscillators. The array displays a coherent in-phase mode for f = 1/2, where f is the magnetic frustration, while for 0 < f < 1/2 the oscillators lock into voltage splay modes; the locked states are tolerant of critical current disorder approaching 100%. The stability of the array has also been studied by computing Floquet exponents. These exponents are found to be negative for all array lengths, with a 1/N² dependence, N being the number of series-connected junctions.

  8. Short-term stream flow forecasting at Australian river sites using data-driven regression techniques

    CSIR Research Space (South Africa)

    Steyn, Melise

    2017-09-01

    Full Text Available This study proposes a computationally efficient solution to stream flow forecasting for river basins where historical time series data are available. Two data-driven modeling techniques are investigated, namely support vector regression...

  9. A FPC-ROOT Algorithm for 2D-DOA Estimation in Sparse Array

    Directory of Open Access Journals (Sweden)

    Wenhao Zeng

    2016-01-01

Full Text Available To improve the performance of two-dimensional direction-of-arrival (2D-DOA) estimation in sparse arrays, this paper presents a Fixed Point Continuation Polynomial Roots (FPC-ROOT) algorithm. Firstly, a signal model for DOA estimation is established based on matrix completion, and it can be proved that the proposed model meets the Null Space Property (NSP). Secondly, left and right singular vectors of the received signals matrix are obtained using the matrix completion algorithm. Finally, the 2D-DOA estimate can be acquired by solving the polynomial roots. The proposed algorithm achieves high accuracy of 2D-DOA estimation in sparse arrays without solving the autocorrelation matrix of the received signals or scanning the two-dimensional spectral peak. Besides, it decreases the number of antennas and lowers computational complexity while avoiding the angle ambiguity problem. Computer simulations demonstrate that the proposed FPC-ROOT algorithm can obtain 2D-DOA estimates precisely in sparse arrays.

  10. InSTREAM: the individual-based stream trout research and environmental assessment model

    Science.gov (United States)

    Steven F. Railsback; Bret C. Harvey; Stephen K. Jackson; Roland H. Lamberson

    2009-01-01

    This report documents Version 4.2 of InSTREAM, including its formulation, software, and application to research and management problems. InSTREAM is a simulation model designed to understand how stream and river salmonid populations respond to habitat alteration, including altered flow, temperature, and turbidity regimes and changes in channel morphology. The model...

  11. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinver...

  12. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.

  13. Analytical Models of Exoplanetary Atmospheres. IV. Improved Two-stream Radiative Transfer for the Treatment of Aerosols

    International Nuclear Information System (INIS)

    Heng, Kevin; Kitzmann, Daniel

    2017-01-01

    We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method renders it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.

  14. Analytical Models of Exoplanetary Atmospheres. IV. Improved Two-stream Radiative Transfer for the Treatment of Aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Heng, Kevin; Kitzmann, Daniel, E-mail: kevin.heng@csh.unibe.ch, E-mail: daniel.kitzmann@csh.unibe.ch [University of Bern, Center for Space and Habitability, Gesellschaftsstrasse 6, CH-3012, Bern (Switzerland)

    2017-10-01

    We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method renders it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.
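For context, the hemispheric two-stream method that this work generalizes couples the upward and downward fluxes through a pair of coupling coefficients. A conventional form of the standard closure, quoted here as a sketch from common two-stream treatments (not the paper's improved coefficients, and up to sign conventions for the optical-depth coordinate), is:

```latex
\frac{dF_{\uparrow}}{d\tau} = \gamma_a F_{\uparrow} - \gamma_s F_{\downarrow},
\qquad
\frac{dF_{\downarrow}}{d\tau} = \gamma_s F_{\uparrow} - \gamma_a F_{\downarrow},
\qquad
\gamma_a = 2 - \omega_0 \left(1 + g_0\right),
\quad
\gamma_s = \omega_0 \left(1 - g_0\right),
```

where ω0 is the single-scattering albedo and g0 the scattering asymmetry factor. The improvement described in the record amounts to letting the ratio of the first Eddington coefficients, implicitly set to unity in the coefficients above, depart from unity, which modifies the coupling coefficients and transmission functions.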

  15. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  16. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  17. Third harmonic generation by Bloch-oscillating electrons in a quasioptical array

    International Nuclear Information System (INIS)

    Ghosh, A.W.; Wanke, M.C.; Allen, S.J.; Wilkins, J.W.

    1999-01-01

We compute the third harmonic field generated by Bloch-oscillating electrons in a quasioptical array of superlattices under THz irradiation. The third harmonic power transmitted oscillates with the internal electric field, with nodes associated with Bessel functions in eEd/ħω. The nonlinear response of the array causes the output power to be a multivalued function of the incident laser power. The output can be optimized by adjusting the frequency of the incident pulse to match one of the Fabry-Pérot resonances in the substrate. Within the transmission-line model of the array, the maximum conversion efficiency is 0.1%. copyright 1999 American Institute of Physics

  18. New Three-Dimensional Neutron Transport Calculation Capability in STREAM Code

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Youqi [Xi' an Jiaotong University, Xi' an (China); Choi, Sooyoung; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

The method of characteristics (MOC) is one of the best choices for its powerful geometry-modeling capability. To reduce the large computational burden of 3D MOC, the 2D/1D schemes were proposed and have achieved great success in the past 10 years. However, such methods have instability problems during the iterations when the axial neutron leakage is large. Therefore, full 3D MOC methods were developed. Much effort has been devoted to reducing the computational costs. However, full 3D MOC still requires too much memory and computational time for the practical modeling of a commercial-size reactor core. Recently, a new approach for the 3D MOC calculation without transverse integration has been implemented in the STREAM code. In this approach, the angular flux is expressed as a basis function expansion in only the axial variable z. A new approach based on the axial expansion and 2D MOC sweeping to solve the 3D neutron transport equation is implemented in the STREAM code. This approach avoids using the transverse integration of the traditional 2D/1D scheme of MOC calculation. By converting the 3D equation into a 2D form for the angular flux expansion coefficients, it also avoids the complex 3D ray tracing. Current numerical tests using two benchmarks show good accuracy of the new method.

  19. Experimental investigation of acoustic streaming in a cylindrical wave guide up to high streaming Reynolds numbers.

    Science.gov (United States)

    Reyt, Ida; Bailliet, Hélène; Valière, Jean-Christophe

    2014-01-01

Measurements of streaming velocity are performed by means of Laser Doppler Velocimetry and Particle Image Velocimetry in an experimental apparatus consisting of a cylindrical waveguide having one loudspeaker at each end for high intensity sound levels. The case of high nonlinear Reynolds number ReNL is particularly investigated. The variation of axial streaming velocity with respect to the axial and to the transverse coordinates is compared to available Rayleigh streaming theory. As expected, the measured streaming velocity agrees well with the Rayleigh streaming theory for small ReNL but deviates significantly from such predictions for high ReNL. When the nonlinear Reynolds number is increased, the outer centerline axial streaming velocity gets distorted towards the acoustic velocity nodes until counter-rotating additional vortices are generated near the acoustic velocity antinodes. This kind of behavior is followed by outer streaming cells only, and measurements in the near wall region show that inner streaming vortices are less affected by this substantial evolution of the fast streaming pattern. Measurements of the transient evolution of streaming velocity provide an additional insight into the evolution of fast streaming.

  20. Ambient noise forecasting with a large acoustic array in a complex shallow water environment.

    Science.gov (United States)

    Rogers, Jeffrey S; Wales, Stephen C; Means, Steven L

    2017-11-01

    Forecasting ambient noise levels in the ocean can be a useful way of characterizing the detection performance of sonar systems and projecting bounds on performance into the near future. The assertion is that noise forecasting can be improved with a priori knowledge of source positions coupled with the ability to resolve closely separated sources in bearing. One example of such a system is the large aperture research array located at the South Florida Test Facility. Given radar and Automatic Identification System defined source positions and environmental information, transmission loss (TL) is computed from known source positions to the array. Source levels (SLs) of individual ships are then estimated from computed TL and the pre-determined beam response of the array using a non-negative least squares algorithm. Ambient noise forecasts are formed by projecting the estimated SLs along known ship tracks. Ambient noise forecast estimates are compared to measured beam level data and mean-squared error is computed. A mean squared error as low as 3.5 dB is demonstrated in 30 min forecast estimates when compared to ground truth.
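The source-level estimation step described above reduces to a non-negative least squares problem: given a matrix A built from transmission loss and the array's beam response, solve min ‖As − b‖² subject to s ≥ 0. A minimal pure-Python sketch with a made-up 3-beam, 2-ship mixing matrix (all numbers hypothetical, and projected gradient standing in for the NNLS solver used in the paper):

```python
def nnls_pg(A, b, steps=5000, lr=0.01):
    """Minimise ||A s - b||^2 subject to s >= 0 via projected gradient descent."""
    m, n = len(A), len(A[0])
    s = [0.0] * n
    for _ in range(steps):
        # residual r = A s - b
        r = [sum(A[i][j] * s[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient component g_j = (A^T r)_j, then project onto s_j >= 0
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))
            s[j] = max(0.0, s[j] - lr * g)
    return s

# Hypothetical "beam response x transmission loss" matrix: 3 beams, 2 ships
A = [[1.0, 0.2],
     [0.3, 0.9],
     [0.5, 0.5]]
true_s = [2.0, 1.5]  # hypothetical linear-scale source levels
b = [sum(A[i][j] * true_s[j] for j in range(2)) for i in range(3)]
est = nnls_pg(A, b)  # recovers approximately [2.0, 1.5]
```

With consistent, noise-free data the projected gradient iterates converge to the true non-negative source levels; a production system would use a dedicated NNLS routine instead.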

  1. Streaming movies, media, and instant access

    CERN Document Server

    Dixon, Wheeler Winston

    2013-01-01

Film stocks are vanishing, but the iconic images of the silver screen remain -- albeit in new, sleeker formats. Today, viewers can instantly stream movies on televisions, computers, and smartphones. Gone are the days when films could only be seen in theaters or rented at video stores: movies are now accessible at the click of a button, and there are no reels, tapes, or discs to store. Any film or show worth keeping may be collected in the virtual cloud and accessed at will through services like Netflix, Hulu, and Amazon Instant. The movies have changed, and we are changing with them.

  2. A Statistical Method to Predict Flow Permanence in Dryland Streams from Time Series of Stream Temperature

    Directory of Open Access Journals (Sweden)

    Ivan Arismendi

    2017-12-01

Full Text Available Intermittent and ephemeral streams represent more than half of the length of the global river network. Dryland freshwater ecosystems are especially vulnerable to changes in human-related water uses as well as shifts in terrestrial climates. Yet, the description and quantification of patterns of flow permanence in these systems is challenging mostly due to difficulties in instrumentation. Here, we took advantage of existing stream temperature datasets in dryland streams in the northwest Great Basin desert, USA, to extract critical information on climate-sensitive patterns of flow permanence. We used a signal detection technique, Hidden Markov Models (HMMs), to extract information from daily time series of stream temperature to diagnose patterns of stream drying. Specifically, we applied HMMs to time series of daily standard deviation (SD) of stream temperature (i.e., dry stream channels typically display highly variable daily temperature records compared to wet stream channels) between April and August (2015–2016). We used information from paired stream and air temperature data loggers as well as co-located stream temperature data loggers with electrical resistors as confirmatory sources of the timing of stream drying. We expanded our approach to an entire stream network to illustrate the utility of the method to detect patterns of flow permanence over a broader spatial extent. We successfully identified and separated signals characteristic of wet and dry stream conditions and their shifts over time. Most of our study sites within the entire stream network exhibited a single state over the entire season (80%), but a portion of them showed one or more shifts among states (17%). We provide recommendations to use this approach based on a series of simple steps. Our findings illustrate a successful method that can be used to rigorously quantify flow permanence regimes in streams using existing records of stream temperature.

  3. A statistical method to predict flow permanence in dryland streams from time series of stream temperature

    Science.gov (United States)

    Arismendi, Ivan; Dunham, Jason B.; Heck, Michael; Schultz, Luke; Hockman-Wert, David

    2017-01-01

    Intermittent and ephemeral streams represent more than half of the length of the global river network. Dryland freshwater ecosystems are especially vulnerable to changes in human-related water uses as well as shifts in terrestrial climates. Yet, the description and quantification of patterns of flow permanence in these systems is challenging mostly due to difficulties in instrumentation. Here, we took advantage of existing stream temperature datasets in dryland streams in the northwest Great Basin desert, USA, to extract critical information on climate-sensitive patterns of flow permanence. We used a signal detection technique, Hidden Markov Models (HMMs), to extract information from daily time series of stream temperature to diagnose patterns of stream drying. Specifically, we applied HMMs to time series of daily standard deviation (SD) of stream temperature (i.e., dry stream channels typically display highly variable daily temperature records compared to wet stream channels) between April and August (2015–2016). We used information from paired stream and air temperature data loggers as well as co-located stream temperature data loggers with electrical resistors as confirmatory sources of the timing of stream drying. We expanded our approach to an entire stream network to illustrate the utility of the method to detect patterns of flow permanence over a broader spatial extent. We successfully identified and separated signals characteristic of wet and dry stream conditions and their shifts over time. Most of our study sites within the entire stream network exhibited a single state over the entire season (80%), but a portion of them showed one or more shifts among states (17%). We provide recommendations to use this approach based on a series of simple steps. Our findings illustrate a successful method that can be used to rigorously quantify flow permanence regimes in streams using existing records of stream temperature.
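The wet/dry classification described above can be illustrated with a minimal two-state Viterbi decoder over a daily temperature-SD series. This is a hedged sketch, not the authors' implementation: the Gaussian emission means, the sticky transition probability, and the synthetic series are all invented for illustration (dry channels are modeled as high-SD days):

```python
import math

def viterbi_wet_dry(sd_series, means=(1.0, 6.0), sigma=1.5, p_stay=0.95):
    """Most likely wet(0)/dry(1) state path for a daily temperature-SD series."""
    def log_emit(x, state):
        # Gaussian log-density up to an additive constant (shared, so it drops out)
        return -0.5 * ((x - means[state]) / sigma) ** 2
    log_stay, log_switch = math.log(p_stay), math.log(1 - p_stay)
    # initialise with a uniform prior over the two states
    score = [log_emit(sd_series[0], s) for s in (0, 1)]
    back = []
    for x in sd_series[1:]:
        step, new = [], []
        for s in (0, 1):
            prev = max((0, 1), key=lambda p: score[p] + (log_stay if p == s else log_switch))
            step.append(prev)
            new.append(score[prev] + (log_stay if prev == s else log_switch) + log_emit(x, s))
        back.append(step)
        score = new
    # backtrack from the best final state
    path = [max((0, 1), key=lambda s: score[s])]
    for step in reversed(back):
        path.append(step[path[-1]])
    path.reverse()
    return path

sd = [0.8, 1.1, 0.9, 5.5, 6.2, 5.9, 6.1]  # synthetic: wet days, then a drying event
states = viterbi_wet_dry(sd)              # [0, 0, 0, 1, 1, 1, 1]
```

The sticky transition probability plays the role of the HMM's persistence assumption: isolated noisy days do not flip the state, but a sustained jump in daily SD does.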

  4. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal...by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach...estimation using a sparse uniform linear array with two CW signals of co-prime frequencies,” IEEE International Workshop on Computational Advances

  5. Programmable architecture for quantum computing

    NARCIS (Netherlands)

    Chen, J.; Wang, L.; Charbon, E.; Wang, B.

    2013-01-01

    A programmable architecture called “quantum FPGA (field-programmable gate array)” (QFPGA) is presented for quantum computing, which is a hybrid model combining the advantages of the qubus system and the measurement-based quantum computation. There are two kinds of buses in QFPGA, the local bus and

  6. Connectivity and conditional models of access and abundance of species in stream networks.

    Science.gov (United States)

    Chelgren, Nathan D; Dunham, Jason B

    2015-07-01

    less than 500 (longnose dace) to greater than 100 000 (sculpin). Although our framework can address the question of effectiveness in a broad array of stream and crossing configurations, much stronger inferences would be possible if future restoration efforts were designed to address the limitations we encountered in this study, particularly the lack of available information on crossings and species presence prior to restoration, and nonrandom selection of crossings to be replaced.

7. Modeling of immission from power plants using a stream-diffusion model

    International Nuclear Information System (INIS)

    Kanevce, Lj.; Kanevce, G.; Markoski, A.

    1996-01-01

Simple empirical and integral immission models are analyzed and compared with complex three-dimensional differential models. Complex differential models need huge computing power, so they are not practical for engineering calculations. In this paper, immission modeling using a stream-diffusion approach is presented. The dispersion process is divided into two parts. The first, the stream part, lies near the pollutant source and is represented as a deflected turbulent jet in a wind field; it ends when the stream (jet) velocity becomes equal to the wind speed. The boundary conditions at the end of the first part are the initial conditions for the second, the diffusion part, which is modeled with a three-dimensional diffusion equation. The temperature gradient, wind speed profile, and diffusion coefficient in this model need not be constants; they can change with height. The presented model is much simpler than complete meteorological differential models, which calculate whole fields of meteorological parameters, yet it is more detailed and gives more valuable results for pollutant dispersion than the widely used integral and empirical models.
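As a point of reference for the diffusion stage, the classical steady-state solution of the advection-diffusion equation for a continuous point source is the Gaussian plume with a ground-reflection image term. The sketch below uses this textbook formula with hypothetical numbers; it is not the paper's model, which additionally allows height-dependent wind and diffusivity:

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Steady-state Gaussian plume concentration with ground reflection.

    q: emission rate, u: wind speed, h: effective stack height,
    sigma_y / sigma_z: lateral / vertical dispersion at the downwind distance.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image-source term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical numbers: 100 g/s source, 5 m/s wind, 50 m effective stack height
c_centre = gaussian_plume(100.0, 5.0, 0.0, 0.0, 40.0, 20.0, 50.0)
c_offset = gaussian_plume(100.0, 5.0, 60.0, 0.0, 40.0, 20.0, 50.0)
```

The concentration is largest on the plume centerline and symmetric in the crosswind coordinate y, which is the behavior any more detailed immission model must reduce to in the constant-coefficient limit.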

  8. Non-streaming high-efficiency perforated semiconductor neutron detectors, methods of making same and measuring wand and detector modules utilizing same

    Science.gov (United States)

    McGregor, Douglas S.; Shultis, John K.; Rice, Blake B.; McNeil, Walter J.; Solomon, Clell J.; Patterson, Eric L.; Bellinger, Steven L.

    2010-12-21

    Non-streaming high-efficiency perforated semiconductor neutron detectors, method of making same and measuring wands and detector modules utilizing same are disclosed. The detectors have improved mechanical structure, flattened angular detector responses, and reduced leakage current. A plurality of such detectors can be assembled into imaging arrays, and can be used for neutron radiography, remote neutron sensing, cold neutron imaging, SNM monitoring, and various other applications.

  9. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
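The design-matrix-free computation mentioned above rests on a standard Kronecker identity: for a two-dimensional GLAM with marginal design matrices B1 and B2, (B2 ⊗ B1) vec(X) = vec(B1 X B2ᵀ), so the full tensor-product matrix never has to be formed. A small pure-Python check of the identity (toy matrices, column-major vec; illustrative only, not the fitting algorithm of the paper):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in arow for b in brow] for arow in A for brow in B]

def vec(X):
    """Column-major vectorisation."""
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

B1 = [[1.0, 2.0], [0.0, 1.0], [3.0, 1.0]]   # 3 x 2 marginal design matrix
B2 = [[1.0, 0.0, 2.0], [2.0, 1.0, 0.0]]     # 2 x 3 marginal design matrix
X  = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]     # 2 x 3 coefficient array

# Naive route: build the (m1*m2) x (n1*n2) tensor-product design matrix
big = kron(B2, B1)
lhs = [sum(r * v for r, v in zip(row, vec(X))) for row in big]

# GLAM route: two small matrix products, the Kronecker product is never formed
rhs = vec(matmul(matmul(B1, X), transpose(B2)))  # identical to lhs
```

For d-dimensional arrays the same rotation trick applies one marginal matrix at a time, which is what makes large-scale GLAM fitting feasible in memory.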

  10. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable

  11. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas
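The notion of productivity can be mimicked with lazy generators: a definition is productive when every prefix of the stream can be computed in finite time. A sketch in Python (the stream definitions `ones = 1 : ones`, `nats = 0 : map (+1) nats` and the unproductive `bad = tail(bad)` are standard textbook examples, not taken from these records):

```python
from itertools import islice

def ones():
    """Productive: ones = 1 : ones -- every element is reached in finite time."""
    while True:
        yield 1

def nats():
    """Productive: nats = 0 : map (+1) nats, written iteratively."""
    n = 0
    while True:
        yield n
        n += 1

def bad():
    """Unproductive: bad = tail(bad) -- evaluation recurses before producing
    anything, so requesting even one element never terminates normally
    (in CPython it raises RecursionError)."""
    yield from islice(bad(), 1, None)

first = list(islice(nats(), 5))  # [0, 1, 2, 3, 4]
```

A productivity decision procedure such as the one in these records answers, syntactically, whether a definition behaves like `ones`/`nats` or like `bad`, without running it.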

  12. Streaming Model Based Volume Ray Casting Implementation for Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Jusub Kim

    2009-01-01

Full Text Available Interactive high quality volume rendering is becoming increasingly more important as the amount of more complex volumetric data steadily grows. While a number of volumetric rendering techniques have been widely used, ray casting has been recognized as an effective approach for generating high quality visualization. However, for most users, the use of ray casting has been limited to very small datasets because of its high demands on computational power and memory bandwidth. The recent introduction of the Cell Broadband Engine (Cell B.E.) processor, which consists of 9 heterogeneous cores designed to handle extremely demanding computations with large streams of data, provides an opportunity to put ray casting into practical use. In this paper, we introduce an efficient parallel implementation of volume ray casting on the Cell B.E. The implementation is designed to take full advantage of the computational power and memory bandwidth of the Cell B.E. using an intricate orchestration of the ray casting computation on the available heterogeneous resources. Specifically, we introduce streaming model based schemes and techniques to efficiently implement acceleration techniques for ray casting on the Cell B.E. In addition to ensuring effective SIMD utilization, our method provides two key benefits: there is no cost for empty space skipping and there is no memory bottleneck when moving volumetric data for processing. Our experimental results show that we can interactively render practical datasets on a single Cell B.E. processor.
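The two acceleration techniques named in the record, empty-space skipping and early ray termination, can be sketched for a single ray with front-to-back alpha compositing. A toy pure-Python version (axis-aligned rays, an invented opacity transfer function, and a hand-made 4-slice volume; nothing here reflects the Cell B.E. SIMD implementation):

```python
def cast_ray(volume, x, y, opacity=0.1, early_exit=0.95):
    """Front-to-back compositing along the z axis for one pixel (x, y)."""
    colour, alpha = 0.0, 0.0
    for z in range(len(volume)):
        sample = volume[z][y][x]      # scalar density in [0, 1]
        if sample == 0.0:
            continue                  # empty-space skip: nothing to composite
        a = opacity * sample          # toy opacity transfer function
        colour += (1.0 - alpha) * a * sample
        alpha += (1.0 - alpha) * a
        if alpha >= early_exit:       # early ray termination: ray is opaque
            break
    return colour, alpha

# Tiny 4x2x2 volume: two empty slabs in front, denser slabs behind
volume = [[[0.0, 0.0], [0.0, 0.0]],
          [[0.0, 0.0], [0.0, 0.0]],
          [[1.0, 0.5], [0.0, 1.0]],
          [[1.0, 1.0], [1.0, 1.0]]]
image = [[cast_ray(volume, x, y) for x in range(2)] for y in range(2)]
```

The streaming formulation in the record amounts to pushing bricks of such samples through the SPE cores so that rays never stall on memory, while the two early-out branches above remove wasted compositing work.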

  13. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

14. Influence of anodization parameters on the morphology of TiO2 nanotube arrays

    Science.gov (United States)

    Omidvar, Hamid; Goodarzi, Saba; Seif, Ahmad; Azadmehr, Amir R.

    2011-07-01

TiO2 nanotube arrays can be fabricated by electrochemical anodization in organic and inorganic electrolytes. The morphology of these nanotube arrays changes when anodization parameters such as applied voltage, type of electrolyte, time, and temperature are varied. Nanotube arrays fabricated by anodization of commercial titanium in electrolytes containing NH4F solution and either sulfuric or phosphoric acid were studied at room temperature; the time of anodization was kept constant. Applied voltage, fluoride ion concentration, and acid concentration were varied and their influence on the TiO2 nanotubes was investigated. The anodizing current density was recorded by a computer-controlled digital multimeter. The surface morphology (top view) of the nanotube arrays was observed by SEM. The nanotube arrays in this study have inner diameters in the range of 40-80 nm.

  15. Cryogenic deuterium Z-pinch and wire array Z-pinch studies at Imperial College

    International Nuclear Information System (INIS)

    Haines, M.G.; Aliaga-Rossel, R.; Beg, N.F.

    2001-01-01

Z-pinch experiments using cryogenic deuterium fibre loads have been carried out on the MAGPIE generator at currents up to 1.4 MA. m=0 instabilities in the corona caused plasma expansion and disruption before the plasma could enter the collisionless large ion Larmor radius regime. For the last 12 months we have studied aluminium wire array implosions using laser probing, optical streaks and gated X-ray images. Plasma from the wires is accelerated to the axis as radial plasma streams with uncorrelated m=0 instabilities superimposed. Later in the discharge a global Rayleigh-Taylor (R-T) instability develops. Single and double aluminium and tungsten wire shots were conducted at 150 kA. 2-D and 3-D simulations and a heuristic model of wire arrays will be presented along with theories on the combined MHD/R-T instability and sheared axial flow generation by large ion Larmor radius effects. (author)

  16. Cryogenic deuterium Z-pinch and wire array Z-pinch studies at imperial college

    International Nuclear Information System (INIS)

    Haines, M.G.; Aliaga-Rossel, R.; Beg, F.N.

    1999-01-01

Z-pinch experiments using cryogenic deuterium fibre loads have been carried out on the MAGPIE generator at currents up to 1.4 MA. m=0 instabilities in the corona caused plasma expansion and disruption before the plasma could enter the collisionless large ion Larmor radius regime. For the last 12 months we have studied aluminium wire array implosions using laser probing, optical streaks and gated X-ray images. Plasma from the wires is accelerated to the axis as radial plasma streams with uncorrelated m=0 instabilities superimposed. Later in the discharge a global Rayleigh-Taylor (R-T) instability develops. Single and double aluminium and tungsten wire shots were conducted at 150 kA. 2-D and 3-D simulations and a heuristic model of wire arrays will be presented along with theories on the combined MHD/R-T instability and sheared axial flow generation by large ion Larmor radius effects. (author)

  17. Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid.

    Science.gov (United States)

    Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz

    2016-01-01

Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead.
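The TESLA building block that inf-TESLA extends is a one-way hash key chain: keys are derived by repeated hashing, the last derived key is published as a commitment, and each later-disclosed interval key is authenticated by hashing it back to that commitment. A minimal sketch with Python's hashlib/hmac (key-chain length, frame contents, and interval numbering are invented; the Dual Offset Key Chains mechanism itself is not reproduced):

```python
import hashlib
import hmac

def make_key_chain(seed, n):
    """Derive K_n ... K_0 by repeated hashing; K_0 serves as the commitment."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]  # chain[0] is the commitment K_0, chain[i] is K_i

def verify_disclosed_key(commitment, key, interval):
    """Receiver check: hashing K_i exactly i times must reproduce K_0."""
    h = key
    for _ in range(interval):
        h = hashlib.sha256(h).digest()
    return h == commitment

chain = make_key_chain(b"shared-seed", 10)   # hypothetical chain of 10 intervals
commitment = chain[0]

# Sender MACs a synchrophasor frame with K_3, which it discloses one interval later
frame = b"phasor-frame-0042"                 # hypothetical frame payload
tag = hmac.new(chain[3], frame, hashlib.sha256).digest()

# Receiver: first authenticate the disclosed key, then check the MAC
assert verify_disclosed_key(commitment, chain[3], 3)
assert hmac.compare_digest(tag, hmac.new(chain[3], frame, hashlib.sha256).digest())
```

The security argument hinges on timing: a receiver only accepts a tag if the frame arrived before the corresponding key could have been disclosed, which is why the paper's reduction of disclosure delay matters for high-rate synchrophasor streams.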

  18. Frequency-domain imaging algorithm for ultrasonic testing by application of matrix phased arrays

    Directory of Open Access Journals (Sweden)

    Dolmatov Dmitry

    2017-01-01

    Full Text Available Constantly increasing demand for high-performance materials and systems in the aerospace industry requires advanced methods of nondestructive testing. One of the most promising methods is ultrasonic imaging using matrix phased arrays. This technique makes it possible to create three-dimensional ultrasonic images with high lateral resolution. Further progress in matrix phased array ultrasonic testing depends on the development of fast imaging algorithms. In this article an imaging algorithm based on frequency-domain calculations is proposed. This approach is computationally efficient in comparison with time-domain algorithms. Performance of the proposed algorithm was tested via computer simulations for a planar specimen with flat-bottom holes.

  19. Streams and their future inhabitants

    DEFF Research Database (Denmark)

    Sand-Jensen, K.; Friberg, Nikolai

    2006-01-01

    In this final chapter we look ahead and address four questions: How do we improve stream management? What are the likely developments in the biological quality of streams? In which areas is knowledge on stream ecology insufficient? What can streams offer children of today and adults of tomorrow?...

  20. Salamander occupancy in headwater stream networks

    Science.gov (United States)

    Grant, E.H.C.; Green, L.E.; Lowe, W.H.

    2009-01-01

    1. Stream ecosystems exhibit a highly consistent dendritic geometry in which linear habitat units intersect to create a hierarchical network of connected branches. 2. Ecological and life history traits of species living in streams, such as the potential for overland movement, may interact with this architecture to shape patterns of occupancy and response to disturbance. Specifically, large-scale habitat alteration that fragments stream networks and reduces connectivity may reduce the probability a stream is occupied by sensitive species, such as stream salamanders. 3. We collected habitat occupancy data on four species of stream salamanders in first-order (i.e. headwater) streams in undeveloped and urbanised regions of the eastern U.S.A. We then used an information-theoretic approach to test alternative models of salamander occupancy based on a priori predictions of the effects of network configuration, region and salamander life history. 4. Across all four species, we found that streams connected to other first-order streams had higher occupancy than those flowing directly into larger streams and rivers. For three of the four species, occupancy was lower in the urbanised region than in the undeveloped region. 5. These results demonstrate that the spatial configuration of stream networks within protected areas affects the occurrences of stream salamander species. We strongly encourage preservation of network connections between first-order streams in conservation planning and management decisions that may affect stream species.

  1. Streaming Multimedia via Overlay Networks using Wi-Fi Peer-to-Peer Connections

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2017-01-01

    Short-range ad-hoc wireless networks can be used to deliver streaming multimedia for information, entertainment and advertisement purposes. To enable short-range communication between various devices, the Wi-Fi Alliance proposed an extension to the IEEE 802.11 Wi-Fi standard called Wi-Fi Peer-to-Peer (P2P). It allows compliant devices to form ad-hoc communication groups without interrupting conventional access point-based Wi-Fi communication. This paper proposes to use Wi-Fi P2P connectivity to distribute streaming multimedia in ad-hoc formed user groups. The exchange of multimedia data is performed by forming an overlay network using the Peer-to-Peer Streaming Peer Protocol (PPSPP). In order to make PPSPP function over Wi-Fi P2P connections, this paper proposes a number of changes to the protocol. The performance of the proposed system is evaluated using a computer networks emulator...

  2. Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Drouhard, Margaret MEG G [ORNL; Beaver, Justin M [ORNL; Pyle, Joshua M [ORNL; BogenII, Paul L. [Google Inc.

    2015-01-01

    Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics is fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.

  3. Romanian earthquakes analysis using BURAR seismic array

    International Nuclear Information System (INIS)

    Borleanu, Felix; Rogozea, Maria; Nica, Daniela; Popescu, Emilia; Popa, Mihaela; Radulian, Mircea

    2008-01-01

    Bucovina seismic array (BURAR) is a medium-aperture array, installed in 2002 in the northern part of Romania (47.6148° N latitude, 25.2168° E longitude, 1150 m altitude), as a result of the cooperation between the Air Force Technical Applications Center, USA, and the National Institute for Earth Physics, Romania. The array consists of ten elements, located in boreholes and distributed over a 5 x 5 km² area; nine with short-period vertical sensors and one with a broadband three-component sensor. Since the new station began operating, the earthquake survey of Romania's territory has improved significantly. Data recorded by BURAR during the 01.01.2005 - 12.31.2005 time interval were first processed and analyzed in order to establish the array's detection capability for local earthquakes occurring in different Romanian seismic zones. Subsequently, a spectral-ratio technique was applied in order to determine calibration relationships for magnitude, using only the information gathered by the BURAR station. The spectral ratios are computed relative to a reference event, considered representative for each seismic zone. This method has the advantage of eliminating path effects. The new calibration procedure was tested on Vrancea intermediate-depth earthquakes and proved very efficient in constraining the size of these earthquakes. (authors)

  4. Replication of optical microlens arrays using photoresist coated molds

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Dam-Hansen, Carsten; Stubager, Jørgen

    2016-01-01

    A cost-reduced method of producing injection molding tools is reported and demonstrated for the fabrication of optical microlens arrays. A standard computer-numerical-control (CNC) milling machine was used to make a rough mold in steel. Surface treatment of the steel mold by spray coating with photoresist is used to smooth the mold surface, providing good optical quality. The tool and process are demonstrated for the fabrication of an ø50 mm beam homogenizer for a color-mixing LED light engine. The acceptance angle of the microlens array is optimized in order to maximize the optical efficiency from...

  5. Requirements for the GCFR plenum streaming experiment

    International Nuclear Information System (INIS)

    Perkins, R.G.; Rouse, C.A.; Hamilton, C.J.

    1980-09-01

    This report gives the experiment objectives and generic descriptions of experimental configurations for the gas-cooled fast breeder reactor (GCFR) plenum shield experiment. This report defines four experiment phases. Each phase represents a distinct area of uncertainty in computing radiation transport from the GCFR core to the plenums, through the upper and lower plenum shields, and ultimately to the prestressed concrete reactor vessel (PCRV) liner: (1) the shield heterogeneity phase; (2) the exit shield simulation phase; (3) the plenum streaming phase; and (4) the plenum shield simulation phase

  6. Stream hydraulics and temperature determine the metabolism of geothermal Icelandic streams

    Directory of Open Access Journals (Sweden)

    Demars B. O.L.

    2011-07-01

    Full Text Available Stream ecosystem metabolism plays a critical role in planetary biogeochemical cycling. Stream benthic habitat complexity and the available surface area for microbes relative to the free-flowing water volume are thought to be important determinants of ecosystem metabolism. Unfortunately, the engineered deepening and straightening of streams for drainage purposes could compromise stream natural services. Stream channel complexity may be quantitatively expressed with hydraulic parameters such as water transient storage, storage residence time, and water spiralling length. The temperature dependence of whole stream ecosystem respiration (ER), gross primary productivity (GPP) and net ecosystem production (NEP = GPP − ER) has recently been evaluated with a “natural experiment” in Icelandic geothermal streams along a 5–25 °C temperature gradient. There remained, however, a substantial amount of unexplained variability in the statistical models, which may be explained by hydraulic parameters found to be unrelated to temperature. We also specifically tested the additional and predicted synergistic effects of water transient storage and temperature on ER, using novel, more accurate, methods. Both ER and GPP were highly related to water transient storage (or water spiralling length) but not to the storage residence time. While there was an additional effect of water transient storage and temperature on ER (r2 = 0.57; P = 0.015), GPP was more related to water transient storage than temperature. The predicted synergistic effect could not be confirmed, most likely due to data limitation. Our interpretation, based on causal statistical modelling, is that the metabolic balance of streams (NEP) was primarily determined by the temperature dependence of respiration. Further field and experimental work is required to test the predicted synergistic effect on ER. Meanwhile, since higher metabolic activities allow for higher pollutant degradation or uptake

  7. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  8. Computer-aided meiotic maturation assay (CAMMA) of zebrafish (Danio rerio) oocytes in vitro.

    Science.gov (United States)

    Lessman, Charles A; Nathani, Ravikanth; Uddin, Rafique; Walker, Jamie; Liu, Jianxiong

    2007-01-01

    We have developed a new technique called Computer-Aided Meiotic Maturation Assay (CAMMA) for imaging large arrays of zebrafish oocytes and automatically collecting image files at regular intervals during meiotic maturation. This novel method uses a transparency scanner interfaced to a computer with macro programming that automatically scans and archives the image files. Images are stacked and analyzed with ImageJ to quantify changes in optical density characteristic of zebrafish oocyte maturation. Major advantages of CAMMA include (1) the ability to image very large arrays of oocytes and follow individual cells over time, (2) simultaneous imaging of many treatment groups, (3) digitized images that may be stacked, animated, and analyzed in programs such as ImageJ, NIH-Image, or ScionImage, and (4) low cost, as a CAMMA system is less expensive than most microscopes used in traditional assays. We have used CAMMA to determine the dose response and time course of oocyte maturation induced by 17alpha-hydroxyprogesterone (HP). The maximal decrease in optical density occurs around 5 hr after 0.1 µg/ml HP (28.5 °C), approximately 3 hr after germinal vesicle migration (GVM) and dissolution (GVD). In addition to changes in optical density, GVD is accompanied by streaming of ooplasm to the animal pole to form a blastodisc. These dynamic changes are readily visualized by animating image stacks from CAMMA; thus, CAMMA provides a valuable source of time-lapse movies for those studying zebrafish oocyte maturation. The oocyte clearing documented by CAMMA is correlated with changes in the size distribution of major yolk proteins upon SDS-PAGE, and this, in turn, is related to increased cyclin B1 protein.

  9. Application of Field programmable Gate Array to Digital Signal ...

    African Journals Online (AJOL)

    Journal of Research in National Development ... This work shows how one parallel technology, the Field Programmable Gate Array (FPGA), can be applied to digital signal processing problems to increase computational speed. ... In this research work the FPGA exploits parallelism, as an FPGA is an inherently parallel device. With the ...

  10. Review on Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    P. Sumithra

    2017-03-01

    Full Text Available Computational electromagnetics (CEM) is applied to model the interaction of electromagnetic fields with objects such as antennas, waveguides and aircraft, and with their environment, using Maxwell's equations. In this paper the strengths and weaknesses of various computational electromagnetics techniques are discussed. The performance of these techniques in terms of accuracy, memory and computational time for application-specific tasks, such as modeling RCS (radar cross section), space applications, thin wires and antenna arrays, is presented.

  11. Optimal Chunking of Large Multidimensional Arrays for Data Warehousing

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Otoo, Ekow J.; Rotem, Doron; Seshadri, Sridhar

    2008-02-15

    Very large multidimensional arrays are commonly used in data-intensive scientific computations as well as in on-line analytical processing applications, referred to as MOLAP. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" The problem of optimal chunking was first introduced by Sarawagi and Stonebraker, who gave an approximate solution. In this paper we develop exact mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic and real-life workloads, show that our solutions are consistently within 2.0 percent of the true number of chunks retrieved for any number of dimensions. In contrast, the approximate solution of Sarawagi and Stonebraker can deviate considerably from the true result with increasing number of dimensions and may also lead to suboptimal chunk shapes.
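
The cost model behind optimal chunking is easy to state: if a query's position is uniformly random on the integer grid, then along one dimension a query of side q touches on average (q + c - 1)/c chunks of side c, and the dimensions multiply. The following brute-force sketch is illustrative only; it is not the paper's steepest-descent or geometric-programming solution.

```python
from itertools import product
import math

def expected_chunks(chunk: tuple, query: tuple) -> float:
    """Expected number of chunks a query touches, assuming the query's
    position is uniformly random on the integer grid.  Per dimension the
    expectation is (q + c - 1) / c; dimensions are independent."""
    return math.prod((q + c - 1) / c for c, q in zip(chunk, query))

def best_chunk_shape(volume: int, query: tuple) -> tuple:
    """Brute-force the chunk shape (with total volume <= `volume`) that
    minimizes the expected number of chunks retrieved for one query shape."""
    dims = len(query)
    best, best_cost = None, float("inf")
    for shape in product(range(1, volume + 1), repeat=dims):
        if math.prod(shape) > volume:
            continue
        cost = expected_chunks(shape, query)
        if cost < best_cost:
            best, best_cost = shape, cost
    return best
```

As intuition suggests, the optimal chunk shape tends to follow the aspect ratio of the query workload rather than being square.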

  12. Manipulating Liquids With Acoustic Radiation Pressure Phased Arrays

    Science.gov (United States)

    Oeftering, Richard C.

    1999-01-01

    High-intensity ultrasound waves can produce the effects of "Acoustic Radiation Pressure" (ARP) and "acoustic streaming." These effects can be used to propel liquid flows and to apply forces that can be used to move or manipulate floating objects or liquid surfaces. NASA's interest in ARP includes the remote-control agitation of liquids and the manipulation of bubbles and drops in liquid experiments and propellant systems. A high level of flexibility is attained by using a high-power acoustic phased array to generate, steer, and focus a beam of acoustic waves. This is called an Acoustic Radiation Pressure Phased Array, or ARPPA. In this approach, many acoustic transducer elements emit wavelets that converge into a single beam of sound waves. Electronically coordinating the timing, or "phase shift," of the acoustic waves makes it possible to form a beam with a predefined direction and focus. Therefore, a user can direct the ARP force at almost any desired point within a liquid volume. ARPPA lets experimenters manipulate objects anywhere in a test volume. This flexibility allows it to be used for multiple purposes, such as to agitate liquids, deploy and manipulate drops or bubbles, and even suppress sloshing in spacecraft propellant tanks.
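
Electronically focusing a phased array reduces to a simple geometric computation: each element fires with a delay chosen so that all wavelets arrive at the focal point at the same instant. A minimal sketch under an assumed sound speed (this is generic phased-array geometry, not NASA's ARPPA implementation):

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, assumed value for a water-like liquid

def focusing_delays(element_positions, focus, c=SPEED_OF_SOUND):
    """Per-element firing delays so all wavelets arrive at `focus` together.

    The farthest element fires first (zero delay); every nearer element
    waits by the difference in acoustic travel time.
    """
    times = [math.dist(p, focus) / c for p in element_positions]
    t_max = max(times)
    return [t_max - t for t in times]
```

Sweeping the focal point through the liquid volume then amounts to recomputing this delay profile for each new target.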

  13. Nitrogen saturation in stream ecosystems.

    Science.gov (United States)

    Earl, Stevan R; Valett, H Maurice; Webster, Jackson R

    2006-12-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient of background N concentration. Uptake increased in four of six streams as NO3-N was incrementally elevated, indicating that these streams were not saturated. Uptake generally corresponded to Michaelis-Menten kinetics but deviated from the model in two streams where some other growth-critical factor may have been limiting. Proximity to saturation was correlated to background N concentration but was better predicted by the ratio of dissolved inorganic N (DIN) to soluble reactive phosphorus (SRP), suggesting phosphorus limitation in several high-N streams. Uptake velocity, a reflection of uptake efficiency, declined nonlinearly with increasing N amendment in all streams. At the same time, uptake velocity was highest in the low-N streams. Our conceptual model of N transport, uptake, and uptake efficiency suggests that, while streams may be active sites of N uptake on the landscape, N saturation contributes to nonlinear changes in stream N dynamics that correspond to decreased uptake efficiency.
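
The kinetics invoked here are the standard Michaelis-Menten form, U = U_max · C / (K_s + C), with uptake velocity v_f = U / C as the efficiency index; v_f necessarily declines as concentration rises toward saturation, which is the signature the study reports. A small illustrative sketch (parameter values are arbitrary examples, not the paper's fitted constants):

```python
def mm_uptake(conc, u_max, k_s):
    """Michaelis-Menten areal uptake: U = U_max * C / (K_s + C)."""
    return u_max * conc / (k_s + conc)

def uptake_velocity(conc, u_max, k_s):
    """Uptake velocity v_f = U / C, an index of uptake efficiency.
    It declines nonlinearly as concentration rises toward saturation."""
    return mm_uptake(conc, u_max, k_s) / conc
```

At C = K_s the uptake is exactly half of U_max, and no concentration can push U above U_max; that asymptote is what "saturation" means in this model.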

  14. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented for measuring the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  15. Small-angle tomography algorithm for transmission inspection of acoustic linear array

    Directory of Open Access Journals (Sweden)

    Soldatov Alexey

    2016-01-01

    Full Text Available The paper describes an algorithm for the reconstruction of tomographic images used in the through-transmission method with small-angle sounding by acoustic linear arrays, and the results of practical application of the proposed algorithm. Alternate probing by each element of the emitting array, with simultaneous reception by all elements of the receiving array, yields a collection of shadow images of the testing zone. The testing zone is divided into small local areas, and using the collection of shadow images a matrix of normalized transmission coefficients is computed for each small local area. The tomographic image of the testing zone is obtained by presenting the resulting matrix of normalized transmission coefficients in grayscale or color.

  16. Tunoe Knob wind turbine array. Visualization and aesthetic evaluation

    International Nuclear Information System (INIS)

    1994-09-01

    The aesthetic effects of locating a wind turbine array in the Danish coastal waters at Tunoe Knob, between Tunoe and the Juttish east coast, are discussed. The visualization project made use of a video film which analyzed the effect of the configurations of the wind turbine array on the coastal landscape as seen from a number of viewpoints. A computer model illustrated the aesthetic effects of viewing the windmills as the viewer moves along the east coast of Jutland and across the sea towards the islands of Tunoe and Samsoe. The results should form the basis of the authorities' decision-making regarding the configuration of the wind turbine array. An account of the visualization methods is given, together with the criteria on which the choice of configuration was based. The chosen configuration is visualized, from points near and far from the wind turbine array, in the form of maps, diagrams and photographs of the sea and landscape. (AB)

  17. Multi-terabyte EIDE disk arrays running Linux RAID5

    International Nuclear Information System (INIS)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; Petravick, D.L.

    2004-01-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important
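
The parity mechanism that lets RAID-5 survive a single disk failure is bytewise XOR: the parity block is the XOR of a stripe's data blocks, so any one missing block equals the XOR of all the surviving blocks. A toy sketch of the arithmetic (illustrating the principle only, not the Linux md or 3ware implementation):

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def make_stripe(data_blocks):
    """Append the parity block (XOR of the data) to a stripe."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def recover(stripe, lost_index):
    """Rebuild one missing block as the XOR of the surviving blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)
```

Because XOR is its own inverse, the same `recover` routine rebuilds a lost data block or the parity block itself; two simultaneous losses, as the abstract notes, are unrecoverable.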

  18. Multi-terabyte EIDE disk arrays running Linux RAID5

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; /Mississippi U.; Petravick, D.L.; /Fermilab

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  19. Photoacoustic projection imaging using an all-optical detector array

    Science.gov (United States)

    Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.

    2018-02-01

    We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information of large volumes can be retrieved with less effort compared to common photoacoustic computed tomography where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers, distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane, perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 Pa at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates monitoring of dynamic processes.

  20. Using hardware models to quantify sensory data acquisition across the rat vibrissal array.

    Science.gov (United States)

    Gopal, Venkatesh; Hartmann, Mitra J Z

    2007-12-01

    Our laboratory investigates how animals acquire sensory data to understand the neural computations that permit complex sensorimotor behaviors. We use the rat whisker system as a model to study active tactile sensing; our aim is to quantitatively describe the spatiotemporal structure of incoming sensory information to place constraints on subsequent neural encoding and processing. In the first part of this paper we describe the steps in the development of a hardware model (a 'sensobot') of the rat whisker array that can perform object feature extraction. We show how this model provides insights into the neurophysiology and behavior of the real animal. In the second part of this paper, we suggest that sensory data acquisition across the whisker array can be quantified using the complete derivative. We use the example of wall-following behavior to illustrate that computing the appropriate spatial gradients across a sensor array would enable an animal or mobile robot to predict the sensory data that will be acquired at the next time step.
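
The complete (material) derivative mentioned above combines the local rate of change at each sensor with an advective term: dS/dt = ∂S/∂t + v · ∂S/∂x, where the spatial gradient can be estimated by finite differences between neighboring sensors in the array. A 1-D illustrative sketch (my own simplification, not the authors' sensobot code):

```python
def complete_derivative(s_prev, s_curr, dt, spacing, velocity):
    """Material (complete) derivative of a field sampled by a 1-D sensor
    array: dS/dt = local time derivative + velocity * spatial gradient,
    with the gradient from central differences across neighboring sensors.
    Returned only for interior sensors (edges lack both neighbors)."""
    n = len(s_curr)
    result = []
    for i in range(1, n - 1):
        d_dt = (s_curr[i] - s_prev[i]) / dt          # local change per sensor
        d_dx = (s_curr[i + 1] - s_curr[i - 1]) / (2 * spacing)  # across array
        result.append(d_dt + velocity * d_dx)
    return result
```

For a field that is simply translating with the sensor platform, the two terms cancel and the complete derivative is zero, which is exactly what makes it useful for predicting the next time step's sensor data during wall following.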

  1. Bayesian Inference of Forces Causing Cytoplasmic Streaming in Caenorhabditis elegans Embryos and Mouse Oocytes.

    Science.gov (United States)

    Niwayama, Ritsuya; Nagao, Hiromichi; Kitajima, Tomoya S; Hufnagel, Lars; Shinohara, Kyosuke; Higuchi, Tomoyuki; Ishikawa, Takuji; Kimura, Akatsuki

    2016-01-01

    Cellular structures are hydrodynamically interconnected, such that force generation in one location can move distal structures. One example of this phenomenon is cytoplasmic streaming, whereby active forces at the cell cortex induce streaming of the entire cytoplasm. However, it is not known how the spatial distribution and magnitude of these forces move distant objects within the cell. To address this issue, we developed a computational method that used cytoplasm hydrodynamics to infer the spatial distribution of shear stress at the cell cortex induced by active force generators from experimentally obtained flow fields of cytoplasmic streaming. By applying this method, we determined the shear-stress distribution that quantitatively reproduces in vivo flow fields in Caenorhabditis elegans embryos and mouse oocytes during meiosis II. Shear stress in mouse oocytes was predicted to localize to a narrower cortical region than that with a high cortical flow velocity and corresponded with the localization of the cortical actin cap. The predicted patterns of pressure gradient in both species were consistent with species-specific cytoplasmic streaming functions. The shear-stress distribution inferred by our method can contribute to the characterization of active force generation driving biological streaming.

  2. Array design and expression evaluation in POOMA II

    Energy Technology Data Exchange (ETDEWEB)

    Karmesin, S.; Crotinger, J.; Cummings, J.; Haney, S.; Humphrey, W.; Reynders, J.; Smith, S.; Williams, T.J.

    1998-12-31

    POOMA is a templated C++ class library for use in the development of large-scale scientific simulations on serial and parallel computers. POOMA II is a new design and implementation of POOMA intended to add richer capabilities and greater flexibility to the framework. The new design employs a generic Array class that acts as an interface to, or view on, a wide variety of data representation objects referred to as engines. This design separates the interface and the representation of multidimensional arrays. The separation is achieved using compile-time techniques rather than virtual functions, and thus code efficiency is maintained. POOMA II uses PETE, the Portable Expression Template Engine, to efficiently represent complex mathematical expressions involving arrays and other objects. The representation of expressions is kept separate from expression evaluation, allowing the use of multiple evaluator mechanisms that can support nested where-block constructs, hardware-specific optimizations and different run-time environments.
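
POOMA II realizes this interface/representation split in C++ with compile-time expression templates. The same underlying idea, building a deferred expression tree and evaluating it elementwise in a single pass with no temporary arrays, can be sketched in Python (a toy illustration of the concept, not POOMA's actual API or its compile-time mechanism):

```python
class Expr:
    """Node in a deferred expression tree over array-like operands."""
    def __add__(self, other): return BinOp(self, other, lambda a, b: a + b)
    def __mul__(self, other): return BinOp(self, other, lambda a, b: a * b)

class Array(Expr):
    """Leaf node: an interface over a concrete data 'engine' (here a list)."""
    def __init__(self, data): self.data = list(data)
    def at(self, i): return self.data[i]
    def __len__(self): return len(self.data)

class BinOp(Expr):
    """Interior node: represents an operation without performing it."""
    def __init__(self, lhs, rhs, op):
        self.lhs, self.rhs, self.op = lhs, rhs, op
    def at(self, i): return self.op(self.lhs.at(i), self.rhs.at(i))
    def __len__(self): return len(self.lhs)

def evaluate(expr):
    """Separate evaluator: one elementwise pass over the whole tree,
    producing no intermediate arrays for sub-expressions."""
    return [expr.at(i) for i in range(len(expr))]
```

Keeping `evaluate` separate from the expression representation is the point: different evaluators (serial, parallel, hardware-specific) can walk the same tree, which is the flexibility the POOMA II design aims for.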

  3. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing [1]. It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  4. Preliminary assessment of streamflow characteristics for selected streams at Fort Gordon, Georgia, 1999-2000

    Science.gov (United States)

    Stamey, Timothy C.

    2001-01-01

    In 1999, the U.S. Geological Survey, in cooperation with the U.S. Army Signal Center and Fort Gordon, began collecting periodic streamflow data at four streams on the military base to assess and estimate streamflow characteristics of those streams as potential water-supply sources. Simple and reliable methods of determining streamflow characteristics of selected streams on the military base are needed for the initial implementation of the Fort Gordon Integrated Natural Resources Management Plan. Long-term streamflow data from the Butler Creek streamflow gaging station were used, along with several concurrent discharge measurements made at three selected partial-record streamflow stations on Fort Gordon, to determine selected low-flow characteristics. Streamflow data were collected and analyzed using standard U.S. Geological Survey methods and computer application programs to verify the use of simple drainage-area-to-discharge ratios, which were used to estimate the low-flow characteristics for the selected streams. Low-flow data computed from daily mean streamflow include mean discharges for consecutive 1-, 3-, 7-, 14-, and 30-day periods and low-flow estimates for the 7Q10, 30Q2, 60Q2, and 90Q2 recurrence intervals. Flow-duration data also were determined for the 10-, 30-, 50-, 70-, and 90-percent exceedance flows. Preliminary analyses indicate that the flow-duration and selected low-flow statistics for the selected streams range from about 0.15 to 2.27 cubic feet per second per square mile. The long-term gaged streamflow data indicate that streamflow conditions for the period analyzed were in the 50- to 90-percent flow-duration range; that is, streamflow that would be equaled or exceeded about 50 to 90 percent of the time.
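
The drainage-area-ratio method that the study verifies is simple enough to sketch directly: flow at an ungaged site is estimated by scaling flow at a nearby gaged site by the ratio of drainage areas. The numeric values below are illustrative, not from the Fort Gordon data:

```python
def drainage_area_ratio_estimate(q_gaged_cfs, area_gaged_sqmi, area_ungaged_sqmi):
    """Estimate discharge at an ungaged site from a gaged one:
    Q_ungaged = Q_gaged * (A_ungaged / A_gaged)."""
    return q_gaged_cfs * (area_ungaged_sqmi / area_gaged_sqmi)

# Hypothetical values: a 7Q10 of 4.0 cfs at a 20 sq mi gaged basin,
# transferred to a 5 sq mi ungaged basin.
q_est = drainage_area_ratio_estimate(4.0, 20.0, 5.0)
print(q_est)  # -> 1.0 (cfs)
```

The method assumes the two basins have similar climate, geology, and land cover, which is why the study checks it against concurrent discharge measurements at the partial-record stations.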

  5. Adjusting patients streaming initiated by a wait time threshold in emergency department for minimizing opportunity cost.

    Science.gov (United States)

    Kim, Byungjoon B J; Delbridge, Theodore R; Kendrick, Dawn B

    2017-07-10

    Purpose Two different systems for streaming patients were considered to improve efficiency measures such as waiting times (WTs) and length of stay (LOS) for a current emergency department (ED). A typical fast track area (FTA) and a fast track with a wait time threshold (FTW) were designed, and their effectiveness was compared from the perspective of the total opportunity cost of all patients' WTs in the ED. The paper aims to discuss these issues. Design/methodology/approach This retrospective case study used computerized ED patient arrival-to-discharge time logs (between July 1, 2009 and June 30, 2010) to build computer simulation models for the FTA and FTW systems. Various wait time thresholds were applied to stream patients of different acuity levels; the national average wait time for each acuity level was considered as a threshold for streaming patients. Findings The FTW showed a statistically significantly shorter total wait time than the current system or a typical FTA system. Patient streaming management would improve the service quality of the ED as well as patients' opportunity costs by reducing the total LOS in the ED. Research limitations/implications The results of this study were based on computer simulation models with some assumptions, such as no transfer times between processes, an assumed arrival distribution of patients, and no deviation of flow pattern. Practical implications When the streaming of patient flow is managed based on the wait time before being seen by a physician, it is possible for patients to see a physician within a tolerable wait time, resulting in less crowding in the ED. Originality/value A new streaming scheme for patient flow may improve the performance of a fast track system.

  6. Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method

    International Nuclear Information System (INIS)

    Kim, Dong Su

    1992-02-01

    Since radiation streaming through penetrations is often the critical consideration in protecting personnel in a nuclear facility from exposure, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. Apart from the Monte Carlo method, which is accurate but requires a great deal of computing time, these are suitable only for order-of-magnitude calculations where sufficient margin is available. This study developed a Monte Carlo method and constructed a library of solutions for radiation streaming through a straight cylindrical duct in a concrete wall for a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, named the plane streaming kernel, is the average dose rate at the duct outlet and was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at an arbitrary position can be well approximated using the plane streaming kernel with acceptable error. Thus, the library of plane streaming kernels can be used for accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources.
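
The geometric core of duct streaming can be illustrated with a toy Monte Carlo sketch (my own illustration, not the thesis code): the fraction of a parallel, mono-directional beam that passes line-of-sight through a straight cylindrical duct of radius R in a wall of thickness T, incident at angle theta to the duct axis. Scattering and wall penetration, which the real kernels account for, are ignored here:

```python
import math, random

def line_of_sight_fraction(radius_cm, thickness_cm, theta_deg, n=200_000, seed=1):
    rng = random.Random(seed)
    # A ray entering at (x, y) exits displaced by T*tan(theta) along x.
    shift = thickness_cm * math.tan(math.radians(theta_deg))
    hits = 0
    for _ in range(n):
        # Sample the entry point uniformly over the duct mouth (rejection).
        while True:
            x = rng.uniform(-radius_cm, radius_cm)
            y = rng.uniform(-radius_cm, radius_cm)
            if x * x + y * y <= radius_cm ** 2:
                break
        # Count the ray if its exit point is still inside the duct.
        if (x + shift) ** 2 + y ** 2 <= radius_cm ** 2:
            hits += 1
    return hits / n

print(line_of_sight_fraction(10.0, 50.0, 0.0))   # -> 1.0 (normal incidence)
print(line_of_sight_fraction(10.0, 50.0, 30.0))  # -> 0.0 (shift ~28.9 cm exceeds the 20 cm diameter)
```

This is why the library is tabulated against incident angle, duct radius, and wall thickness: the unscattered component drops sharply once T·tan(theta) approaches the duct diameter, and the scattered component computed by the full Monte Carlo then dominates.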

  7. GPUs: An Emerging Platform for General-Purpose Computation

    Science.gov (United States)

    2007-08-01


  8. Percent Forest Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  9. Percent Agriculture Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  10. Electrohydrodynamic actuation of co-flowing liquids by means of microelectrode arrays

    International Nuclear Information System (INIS)

    Garcia-Sanchez, Pablo; Ferney, Mathieu; Ramos, Antonio

    2011-01-01

    Electric fields induce forces at the interface between liquids with different electrical properties (conductivity and/or permittivity). We explore how to use these forces for manipulating two coflowing streams of liquids in a microchannel. A microelectrode array is fabricated at the bottom of the channel and one of the two liquids is labelled with a fluorescent dye for observing the phenomenon. The diffuse interface between the two liquids is deflected depending on the ac signal and conductivity (or permittivity) ratio between the liquids. Only a few volts are needed for observing the interface destabilization, in contrast with other electrode configurations where hundreds of volts are applied.

  11. Electrohydrodynamic actuation of co-flowing liquids by means of microelectrode arrays

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Sanchez, Pablo; Ferney, Mathieu; Ramos, Antonio, E-mail: pablogarcia@us.es [Depto. de Electronica y Electromagnetismo, University of Sevilla (Spain)

    2011-06-23

    Electric fields induce forces at the interface between liquids with different electrical properties (conductivity and/or permittivity). We explore how to use these forces for manipulating two coflowing streams of liquids in a microchannel. A microelectrode array is fabricated at the bottom of the channel and one of the two liquids is labelled with a fluorescent dye for observing the phenomenon. The diffuse interface between the two liquids is deflected depending on the ac signal and conductivity (or permittivity) ratio between the liquids. Only a few volts are needed for observing the interface destabilization, in contrast with other electrode configurations where hundreds of volts are applied.

  12. AHaH computing-from metastable switches to attractors to machine learning.

    Directory of Open Access Journals (Sweden)

    Michael Alexander Nugent

    Full Text Available Modern computing architecture based on the separation of memory and processing leads to a well-known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing, in which memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating under AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high-level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation, and combinatorial optimization of procedures, all key capabilities of biological nervous systems and of modern machine learning algorithms with real-world application.
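
A heavily simplified toy can convey the flavor of the attractor dynamics described above (this is my own schematic, not the paper's circuit model): a synaptic weight subject to decay (volatility) plus Hebbian feedback toward its own decision settles into a stable attractor state that separates the applied patterns:

```python
def ahah_node(inputs, n_steps=200, decay=0.95, lr=0.1):
    """Toy node: binary inputs in {-1, +1}; returns the learned weights.
    Each step applies decay (volatility) and feedback toward the node's
    own decision s = sign(w . x)."""
    n = len(inputs[0])
    w = [0.01 * (i + 1) for i in range(n)]      # small asymmetric seed
    for step in range(n_steps):
        x = inputs[step % len(inputs)]
        y = sum(wi * xi for wi, xi in zip(w, x))  # node activation
        s = 1.0 if y >= 0 else -1.0
        # decay toward zero, then reinforce toward the decision:
        w = [decay * wi + lr * s * xi for wi, xi in zip(w, x)]
    return w

patterns = [[+1, +1], [-1, -1]]   # one independent component of the stream
w = ahah_node(patterns)
y = sum(wi * xi for wi, xi in zip(w, patterns[0]))
print(w, y > 0)   # weights settle near the attractor fixed point (2.0 here)
```

For these two anti-correlated patterns the feedback term is always reinforcing, so each weight obeys w <- 0.95 w + 0.1 and converges to the fixed point 0.1/(1 - 0.95) = 2.0; the node ends up aligned with the independent component of its input stream, which is the behavior the paper builds its logic and learning functions on.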

  13. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  14. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw-data level based on trigger decisions, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  15. Prototype of a production system for Cherenkov Telescope Array with DIRAC

    International Nuclear Information System (INIS)

    Arrabito, L; Bregeon, J; Haupt, A; Graciani Diaz, R; Stagni, F; Tsaregorodtsev, A

    2015-01-01

    The Cherenkov Telescope Array (CTA) — an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale — is the next generation instrument in the field of very high energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of about 10 GB/s for about 1000 hours of observation per year, thus producing several PB/year, is expected. Large CPU time is required for data-processing as well as for the massive Monte Carlo simulations needed for detector calibration purposes. The current CTA computing model is based on a distributed infrastructure for the archive and the off-line data-processing. In order to manage the off-line data-processing in a distributed environment, CTA has evaluated the DIRAC (Distributed Infrastructure with Remote Agent Control) system, which is a general framework for the management of tasks over distributed heterogeneous computing environments. In particular, a production system prototype has been developed, based on the two main DIRAC components, i.e. the Workload Management and Data Management Systems. After three years of successful exploitation of this prototype, for simulations and analysis, we proved that DIRAC provides the functionalities needed for the CTA data processing. Based on these results, the CTA development plan aims to achieve an operational production system, based on the DIRAC Workload Management System, to be ready for the start of the CTA operation phase in 2017-2018. A further important challenge is the fully automated execution of the CTA workflows. For this purpose, we have identified a third DIRAC component, the so-called Transformation System, which offers very interesting functionalities to achieve this automation. The Transformation System is a 'data-driven' system that automatically triggers data-processing and data management operations according to pre

  16. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a whole new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous work describing microelectrode arrays has exploited the interelectrode distance to simulate array behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, owing to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each cell there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring that maintains the same active electrode area as the sum of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry, making it applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a designing tool.
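
The geometric step of the axial neighbor symmetry approximation is easy to make concrete (my reading of the abstract; the parameter values are illustrative): n neighbor disc electrodes of radius a, centered a distance d from the central electrode, are replaced by a concentric ring at radius d with the same total active area:

```python
import math

def equivalent_ring(n_neighbors, a, d):
    """Return (r_inner, r_outer) of a ring centered at radius d whose area
    equals n_neighbors * pi * a**2.
    Ring area: pi*((d+h)**2 - (d-h)**2) = 4*pi*d*h  ->  h = n*a**2/(4*d)."""
    h = n_neighbors * a ** 2 / (4.0 * d)
    return d - h, d + h

# Six 50 nm-radius neighbors at 500 nm from the central electrode:
r_in, r_out = equivalent_ring(6, a=50e-9, d=500e-9)
area = math.pi * (r_out ** 2 - r_in ** 2)
print(r_in, r_out, area / (6 * math.pi * (50e-9) ** 2))  # area ratio -> 1.0
```

With the neighbors collapsed into this ring, the cell has cylindrical symmetry about the central electrode's axis, which is what reduces the 3-D diffusion problem to a 2-D one.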

  17. Evolution and interaction of large interplanetary streams

    International Nuclear Information System (INIS)

    Whang, Y.C.; Burlaga, L.F.

    1985-02-01

    A computer simulation for the evolution and interaction of large interplanetary streams based on multi-spacecraft observations and an unsteady, one-dimensional MHD model is presented. Two events, each observed by two or more spacecraft separated by a distance of the order of 10 AU, were studied. The first simulation is based on the plasma and magnetic field observations made by two radially-aligned spacecraft. The second simulation is based on an event observed first by Helios-1 in May 1980 near 0.6 AU and later by Voyager-1 in June 1980 at 8.1 AU. These examples show that the dynamical evolution of large-scale solar wind structures is dominated by the shock process, including the formation, collision, and merging of shocks. The interaction of shocks with stream structures also causes a drastic decrease in the amplitude of the solar wind speed variation with increasing heliocentric distance, and as a result of interactions there is a large variation of shock strengths and shock speeds. The simulation results shed light on the interpretation of the interaction and evolution of large interplanetary streams. Observations were made along a few limited trajectories, but simulation results can supplement these by providing the detailed evolution process for large-scale solar wind structures in the vast region not directly observed. The use of a quantitative nonlinear simulation model including the shock-merging process is crucial in the interpretation of data obtained in the outer heliosphere.

  18. Morphology of a Wetland Stream

    Science.gov (United States)

    Jurmu; Andrle

    1997-11-01

    Little attention has been paid to wetland stream morphology in the geomorphological and environmental literature, and in the recently expanding wetland reconstruction field, stream design has been based primarily on stream morphologies typical of nonwetland alluvial environments. Field investigation of a wetland reach of Roaring Brook, Stafford, Connecticut, USA, revealed several significant differences between the morphology of this stream and the typical morphology of nonwetland alluvial streams. Six morphological features of the study reach were examined: bankfull flow, meanders, pools and riffles, thalweg location, straight reaches, and cross-sectional shape. It was found that bankfull flow definitions originating from streams in nonwetland environments did not apply. Unusual features observed in the wetland reach include tight bends and a large axial wavelength to width ratio. A lengthy straight reach exists that exceeds what is typically found in nonwetland alluvial streams. The lack of convex bank point bars in the bends, a greater channel width at riffle locations, an unusual thalweg location, and small form ratios (a deep and narrow channel) were also differences identified. Further study is needed on wetland streams of various regions to determine if differences in morphology between alluvial and wetland environments can be applied in order to improve future designs of wetland channels. KEY WORDS: Stream morphology; Wetland restoration; Wetland creation; Bankfull; Pools and riffles; Meanders; Thalweg

  19. CFD simulation of rotor aerodynamic performance when using additional surface structure array

    Science.gov (United States)

    Wang, Bing; Kong, Deyi

    2017-10-01

    The present work analyses the aerodynamic performance of a rotor with an additional surface-structure array in an attempt to maximize its performance in hover flight. Unstructured grids and the Reynolds-Averaged Navier-Stokes equations were used to calculate the performance of the prototype rotor and the rotor with the additional surface-structure array in air. The computational fluid dynamics software FLUENT was used to simulate the thrust of the rotors. The results of the calculations are in reasonable agreement with experimental data, which shows that the calculation model used in this work is useful for simulating the performance of the rotor with an additional surface-structure array. With this theoretical calculation model, the thrusts of rotors with surface-structure arrays of three different shapes were calculated. According to the simulation results and the experimental data, the rotor with the triangular surface-structure array has better aerodynamic performance than the other rotors. Compared with the prototype rotor, the thrust of the rotor with the triangular surface-structure array increases by 5.2% at the operating rotating speed of 3000 r/min, and the additional triangular surface-structure array has almost no influence on the efficiency of the rotor.

  20. Human impacts to mountain streams

    Science.gov (United States)

    Wohl, Ellen

    2006-09-01

    Mountain streams are here defined as channel networks within mountainous regions of the world. This definition encompasses tremendous diversity of physical and biological conditions, as well as history of land use. Human effects on mountain streams may result from activities undertaken within the stream channel that directly alter channel geometry, the dynamics of water and sediment movement, contaminants in the stream, or aquatic and riparian communities. Examples include channelization, construction of grade-control structures or check dams, removal of beavers, and placer mining. Human effects can also result from activities within the watershed that indirectly affect streams by altering the movement of water, sediment, and contaminants into the channel. Deforestation, cropping, grazing, land drainage, and urbanization are among the land uses that indirectly alter stream processes. An overview of the relative intensity of human impacts to mountain streams is provided by a table summarizing human effects on each of the major mountainous regions with respect to five categories: flow regulation, biotic integrity, water pollution, channel alteration, and land use. This table indicates that very few mountains have streams not at least moderately affected by land use. The least affected mountainous regions are those at very high or very low latitudes, although our scientific ignorance of conditions in low-latitude mountains in particular means that streams in these mountains might be more altered than is widely recognized. Four case studies from northern Sweden (arctic region), Colorado Front Range (semiarid temperate region), Swiss Alps (humid temperate region), and Papua New Guinea (humid tropics) are also used to explore in detail the history and effects on rivers of human activities in mountainous regions. The overview and case studies indicate that mountain streams must be managed with particular attention to upstream/downstream connections, hillslope

  1. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
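
The inner loop that BEAGLE accelerates on GPUs and SIMD CPUs is the phylogenetic likelihood (Felsenstein pruning) recursion. A minimal sketch for one alignment column under the Jukes-Cantor model on a two-leaf tree conveys the structure (purely illustrative; not BEAGLE's API, which operates on buffers of partial likelihoods across many sites and nodes):

```python
import math

def jc_prob(i, j, t):
    """Jukes-Cantor transition probability between nucleotide states i, j
    (0..3) over branch length t."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

def site_likelihood(obs_left, obs_right, t_left, t_right):
    """Likelihood of one alignment column for the tree (left:t_left, right:t_right)."""
    lik = 0.0
    for root in range(4):                       # sum over root states, pi = 1/4
        pl = jc_prob(root, obs_left, t_left)    # partial likelihood, left child
        pr = jc_prob(root, obs_right, t_right)  # partial likelihood, right child
        lik += 0.25 * pl * pr
    return lik

# Identical bases at zero branch length: likelihood is the base frequency 0.25.
print(site_likelihood(0, 0, 0.0, 0.0))   # -> 0.25
print(site_likelihood(0, 2, 0.1, 0.1))   # smaller: a mismatch is less likely
```

Real data sets repeat this product-and-sum over thousands of sites and taxa, with identical arithmetic per site, which is exactly the data-parallel pattern that maps well onto GPUs and streaming SIMD instruction sets.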

  2. Pollutant Dispersion Modeling in Natural Streams Using the Transmission Line Matrix Method

    Directory of Open Access Journals (Sweden)

    Safia Meddah

    2015-09-01

    Full Text Available Numerical modeling has become an indispensable tool for solving various physical problems. In this context, we present a model of pollutant dispersion in natural streams for the far-field case, where dispersion is considered longitudinal and one-dimensional in the flow direction. The Transmission Line Matrix (TLM) method, which has earned a reputation as a powerful and efficient numerical method, is used. The presented one-dimensional TLM model requires minimal input data and provides a significant gain in computing time. To validate our model, the results are compared with observations and experimental data from the river Severn (UK. The results show good agreement with the experimental data. The model can be used to predict the spatiotemporal evolution of a pollutant in natural streams for effective and rapid decision-making in emergencies, such as accidental discharges into a stream with dynamics similar to those of the river Severn (UK.
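
For orientation, the one-dimensional equation the TLM model solves can be illustrated with a plain explicit finite-difference analogue (this is not the TLM scheme itself, and the parameter values are illustrative): dC/dt = -u dC/dx + D d2C/dx2, with upwind advection and central-difference dispersion:

```python
def advect_disperse(c, u, d, dx, dt, n_steps):
    """March a concentration profile forward with upwind advection and
    central-difference dispersion. Stability requires u*dt/dx <= 1 and
    d*dt/dx**2 <= 0.5; boundary cells are held at zero."""
    for _ in range(n_steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -u * (c[i] - c[i - 1]) / dx
            dsp = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + dsp)
        c = new
    return c

c0 = [0.0] * 50
c0[5] = 100.0                  # instantaneous release (arbitrary units)
c1 = advect_disperse(c0, u=0.5, d=0.1, dx=1.0, dt=0.5, n_steps=40)
peak = max(range(len(c1)), key=lambda i: c1[i])
print(peak)                    # plume peak has advected downstream of cell 5
```

The TLM formulation replaces this time-marching stencil with voltage pulses scattering on a transmission-line network, which is where its reported gain in computing time and robustness comes from.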

  3. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed. In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
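
The premise that end-fire steering beats broadside for spacing below half a wavelength can be checked numerically (a sketch with isotropic elements and unit weights; this is the classical directivity integral, not the paper's constrained optimization):

```python
import cmath, math

def directivity(n, d_over_lambda, steer_deg, m=4000):
    """Directivity of a uniform line array of n isotropic sensors with
    element spacing d, phase-steered to steer_deg (90 = broadside,
    0 = end-fire), via midpoint-rule integration over the sphere."""
    k_d = 2.0 * math.pi * d_over_lambda
    c0 = math.cos(math.radians(steer_deg))
    def af(theta):
        return sum(cmath.exp(1j * k_d * i * (math.cos(theta) - c0))
                   for i in range(n))
    # Mean radiated power: (1/2) * integral of |AF|^2 sin(theta) d(theta).
    tot, dt = 0.0, math.pi / m
    for j in range(m):
        t = (j + 0.5) * dt
        tot += abs(af(t)) ** 2 * math.sin(t) * dt
    return abs(af(math.radians(steer_deg))) ** 2 / (0.5 * tot)

broadside = directivity(8, 0.25, steer_deg=90.0)   # quarter-wave spacing
endfire = directivity(8, 0.25, steer_deg=0.0)
print(broadside, endfire)   # end-fire is more directive at this spacing
```

Note that the end-fire steering here already needs complex element phases, which is the implementation burden the paper's oversteering method is designed to avoid while keeping real-valued weights.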

  4. Design of 3x3 Focusing Array for Heavy Ion Driver Final Report on CRADA TC-02082-04

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    This memo presents a design of a 3x3 quadrupole array for HIF. It contains 3-D magnetic-field computations of the array built with racetrack coils, with and without different shields. It is shown that it is possible to obtain a low-error magnetic field in the cells and to shield the stray fields to acceptable levels. The array design appears to be a practical solution for arrays of any size for future multi-beam heavy ion fusion drivers.

  5. Potential Impacts of Climate Change on Stream Water Temperatures Across the United States

    Science.gov (United States)

    Ehsani, N.; Knouft, J.; Ficklin, D. L.

    2017-12-01

    Analyses of long-term observation data have revealed significant changes in several components of climate and the hydrological cycle over the contiguous United States during the twentieth and early twenty-first centuries. Mean surface air temperatures have significantly increased in most areas of the country. In addition, water temperatures are increasing in many watersheds across the United States. While there are numerous studies assessing the impact of climate change on air temperatures at regional and global scales, fewer studies have investigated the impacts of climate change on stream water temperatures. Projections of increases in water temperature are particularly important for the conservation of freshwater ecosystems. To achieve better insights into attributes regulating population and community dynamics of aquatic biota at large spatial and temporal scales, we need to establish relationships between environmental heterogeneity and critical biological processes of stream ecosystems at these scales. Increases in stream temperatures caused by the doubling of atmospheric carbon dioxide may result in a significant loss of fish habitat in the United States. Utilization of physically based hydrological-water temperature models is computationally demanding and can be onerous to many researchers who specialize in other disciplines. Using statistical techniques to analyze observational data from 1760 USGS stream temperature gages, our goal is to develop a simple yet accurate method to quantify the impacts of climate warming on stream water temperatures in a way that is practical for aquatic biologists, water and environmental management purposes, and conservation practitioners and policy-makers. Using an ensemble of five global climate models (GCMs), we estimate the potential impacts of climate change on stream temperatures within the contiguous United States based on recent trends. Stream temperatures are projected to increase across the US, but the magnitude of the
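
A common statistical air-to-stream temperature relation of the kind such gage-based studies use is the S-shaped logistic model of Mohseni et al.: T_s = mu + (alpha - mu) / (1 + exp(gamma * (beta - T_a))). The parameter values below are illustrative, not fitted to the USGS gages in the abstract:

```python
import math

def stream_temp(air_c, mu=0.0, alpha=25.0, gamma=0.2, beta=13.0):
    """Logistic (Mohseni-type) stream temperature from air temperature (deg C):
    mu = lower bound, alpha = upper bound, beta = inflection air temperature,
    gamma = slope at the inflection point."""
    return mu + (alpha - mu) / (1.0 + math.exp(gamma * (beta - air_c)))

for t_air in (0.0, 13.0, 30.0):
    print(t_air, round(stream_temp(t_air), 2))
# At the inflection point T_a = beta the prediction is (mu + alpha)/2 = 12.5 C.
```

Because the curve saturates at alpha, the same +2 C of projected air warming produces a large stream response near the inflection point but little change in already-warm streams, which is one reason projected stream-temperature increases vary across gages.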

  6. How and Why Does Stream Water Temperature Vary at Small Spatial Scales in a Headwater Stream?

    Science.gov (United States)

    Morgan, J. C.; Gannon, J. P.; Kelleher, C.

    2017-12-01

    The temperature of stream water is controlled by climatic variables, runoff/baseflow generation, and hyporheic exchange. Hydrologic conditions such as gaining/losing reaches and sources of inflow can vary dramatically along a stream at small spatial scales. In this work, we attempt to discern the extent to which air temperature, groundwater inflow, and precipitation influence stream temperature at small spatial scales along the length of a stream. To address this question, we measured stream temperature along the perennial stream network in a 43 ha catchment with a complex land use history in Cullowhee, NC. Two water temperature sensors were placed along the stream network on opposite sides of the stream at 100-meter intervals and at several locations of interest (e.g., stream junctions). The forty sensors recorded the temperature every 10 minutes for one month in the spring and one month in the summer. A subset of sampling locations, where stream temperature was either consistent or varied from one side of the stream to the other, was explored with a thermal imaging camera to obtain a more detailed representation of the spatial variation in temperature at those sites. These thermal surveys were compared with descriptions of the contributing area at the sample sites in an effort to discern specific causes of differing flow paths. Preliminary results suggest that on some branches of the stream, stormflow has less influence than regular hyporheic exchange, while other tributaries can change dramatically under stormflow conditions. We anticipate this work will lead to a better understanding of temperature patterns in stream networks. A better understanding of the importance of small-scale differences in flow paths to water temperature may be able to inform watershed management decisions in the future.

  7. Sparse Array Angle Estimation Using Reduced-Dimension ESPRIT-MUSIC in MIMO Radar

    Directory of Open Access Journals (Sweden)

    Chaozhu Zhang

    2013-01-01

    Full Text Available Sparse linear arrays provide better performance than filled linear arrays in terms of angle estimation and resolution, with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and the receive array of a bistatic MIMO radar are sparse linear arrays. First, we present an ESPRIT-MUSIC method in which the ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm then uses a MUSIC-based procedure to identify the true direction-cosine estimate from the set of ambiguous candidates. The paired transmit and receive angles can thus be estimated and the manifold ambiguity resolved. However, the proposed algorithm has high computational complexity because it requires a two-dimensional search. A Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is therefore proposed to reduce the complexity of the algorithm; RD-ESPRIT-MUSIC demands only a one-dimensional search. Simulation results demonstrate the effectiveness of the method.

  8. Sparse array angle estimation using reduced-dimension ESPRIT-MUSIC in MIMO radar.

    Science.gov (United States)

    Zhang, Chaozhu; Pang, Yucai

    2013-01-01

    Sparse linear arrays provide better performance than filled linear arrays in terms of angle estimation and resolution, with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and the receive array of a bistatic MIMO radar are sparse linear arrays. First, we present an ESPRIT-MUSIC method in which the ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm then uses a MUSIC-based procedure to identify the true direction-cosine estimate from the set of ambiguous candidates. The paired transmit and receive angles can thus be estimated and the manifold ambiguity resolved. However, the proposed algorithm has high computational complexity because it requires a two-dimensional search. A Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is therefore proposed to reduce the complexity of the algorithm; RD-ESPRIT-MUSIC demands only a one-dimensional search. Simulation results demonstrate the effectiveness of the method.
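The ambiguity-resolution step in both records rests on the MUSIC pseudospectrum. As a minimal illustration (a generic 1-D MUSIC sketch, not the authors' RD-ESPRIT-MUSIC pipeline), the code below estimates a single direction of arrival with a sparse linear array; the element positions, source angle, and noise level are invented for the example.

```python
import numpy as np

def music_spectrum(snapshots, positions, n_sources, grid):
    """1-D MUSIC pseudospectrum for an arbitrary (sparse) linear array.

    snapshots: (n_elements, n_snapshots) complex measurements
    positions: element positions in half-wavelength units
    grid:      candidate angles in radians
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
    noise_sub = vecs[:, :-n_sources]            # noise-subspace eigenvectors
    power = []
    for theta in grid:
        a = np.exp(1j * np.pi * positions * np.sin(theta))   # steering vector
        power.append(1.0 / np.linalg.norm(noise_sub.conj().T @ a) ** 2)
    return np.array(power)

# One source at 20 degrees on a hypothetical sparse array [0, 1, 4, 9, 11].
rng = np.random.default_rng(0)
pos = np.array([0.0, 1.0, 4.0, 9.0, 11.0])
a0 = np.exp(1j * np.pi * pos * np.sin(np.deg2rad(20.0)))
sig = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.05 * (rng.standard_normal((5, 200)) + 1j * rng.standard_normal((5, 200)))
X = np.outer(a0, sig) + noise
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
estimate = np.rad2deg(grid[np.argmax(music_spectrum(X, pos, 1, grid))])
```

Because the element spacings here are coprime, the spectrum has a unique global peak; with more strongly ambiguous geometries, several candidate peaks appear, which is exactly the situation the disambiguation step above addresses.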

  9. CAMS: OLAPing Multidimensional Data Streams Efficiently

    Science.gov (United States)

    Cuzzocrea, Alfredo

    In the context of data stream research, taming the multidimensionality of real-life data streams in order to efficiently support OLAP analysis/mining tasks is a critical challenge. Inspired by this fundamental motivation, in this paper we introduce CAMS (Cube-based Acquisition model for Multidimensional Streams), a model for efficiently OLAPing multidimensional data streams. CAMS combines a set of data stream processing methodologies, namely (i) the OLAP dimension flattening process, which allows us to obtain dimensionality reduction of multidimensional data streams, and (ii) the OLAP stream aggregation scheme, which aggregates data stream readings according to an OLAP-hierarchy-based membership approach. We complete our analytical contribution by means of an experimental assessment and analysis of both the efficiency and the scalability of the OLAPing capabilities of CAMS on synthetic multidimensional data streams. Both analytical and experimental results clearly establish CAMS as an enabling component for next-generation Data Stream Management Systems.
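The OLAP-hierarchy-based aggregation idea can be illustrated with a toy roll-up over stream readings. The sensor/city/region hierarchy below is hypothetical, and the sketch ignores CAMS's dimension-flattening machinery entirely.

```python
from collections import defaultdict

# Hypothetical two-level dimension hierarchy: sensor -> city -> region.
SENSOR_TO_CITY = {"s1": "berlin", "s2": "berlin", "s3": "paris"}
CITY_TO_REGION = {"berlin": "eu", "paris": "eu"}

def roll_up(readings, level):
    """Aggregate (sensor_id, value) stream readings to a hierarchy level,
    i.e. decide each reading's membership at that level and sum values."""
    totals = defaultdict(float)
    for sensor, value in readings:
        key = sensor
        if level in ("city", "region"):
            key = SENSOR_TO_CITY[key]
        if level == "region":
            key = CITY_TO_REGION[key]
        totals[key] += value
    return dict(totals)

stream = [("s1", 2.0), ("s2", 3.0), ("s3", 5.0), ("s1", 1.0)]
by_city = roll_up(stream, "city")      # {'berlin': 6.0, 'paris': 5.0}
```

A real stream engine would of course maintain these totals incrementally per arriving reading rather than rescanning a list.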

  10. Hubble gets new ESA-supplied solar arrays

    Science.gov (United States)

    1993-12-01

    Derek Eaton, ESA project manager, was overjoyed with the success of the day's spacewalk. "To build two such massive arrays some years apart to such tight tolerances and have one replace the other with so few problems is a tribute to the design and manufacturing skills of ESA and British Aerospace, the prime contractor for the arrays", he said. "The skill of Kathy and Tom contributed greatly to this success". The astronauts began their spacewalk at 09h30 p.m. CST (04h30 a.m. CET, Monday). Their first task was to jettison the troublesome solar array that failed to retract yesterday. Perched on the end of the shuttle's robot arm, 7.5 metres above the cargo bay, Thornton carefully released the array. ESA astronaut Claude Nicollier then pulled the arm away from the free-floating panel and mission commander Dick Covey fired the shuttle's thrusters to back away. Endeavour and the discarded array are moving apart at a rate of 18.5 kilometres each 90-minute orbit of the Earth. The array is expected to burn up harmlessly in the Earth's atmosphere within a year or so. The astronauts had no problems installing the new arrays and stowing the left-hand wing in the cargo bay for the return to Earth. The new arrays will remain rolled up against the side of the telescope until the fifth spacewalk on Wednesday/Thursday. The telescope itself will be deployed on Saturday. The telescope's first set of arrays flexed in orbit because of the sudden swings in temperature as the craft moved in and out of sunlight. The movement, or "jitter", affected the telescope's pointing system and disrupted observations at times. The Space Telescope Operations Control Centre largely compensated for the problem with special software, but this occupied a large amount of computer memory. The new arrays incorporate three major changes to eliminate the problem. The metal bi-stem booms, which support the solar blankets, are protected from extreme temperature changes by a concertina-style sleeve made up of one

  11. Nondestructive, energy-dispersive x-ray fluorescence analysis of product-stream concentrations from reprocessed LWR fuels

    International Nuclear Information System (INIS)

    Camp, D.C.; Ruhter, W.D.; Benjamin, S.

    1979-01-01

    Energy-dispersive x-ray fluorescence analysis can be used for quantitative on-line monitoring of the product concentrations in single- or dual-element process streams in a reprocessing plant. The 122-keV gamma ray from ⁵⁷Co is used to excite the K x-rays of uranium and/or plutonium in nitric acid solution streams. A collimated HPGe detector is used to measure the excited x-ray intensities. Net solution radioactivity may be measured by eclipsing the exciting radiation, or by measuring it simultaneously with a second detector. The technique is nondestructive and noninvasive, and is easily adapted directly to pipes containing the solution of interest. The dynamic range of the technique extends from below 1 g/l to 500 g/l. Measurement times depend on concentration, but better than 1% counting statistics can be obtained in 100 s for 400 g/l concentrations, and in 1000 s for as little as 10 g/l. Calibration accuracies of 0.3% or better over the entire dynamic range can be achieved easily using carefully prepared standards. Computer-based analysis equipment allows concentration changes in flowing streams to be monitored dynamically. Changes in the acid normality of the stream will affect the concentration determined, so the normality must also be determined, by measuring the intensity of a transmitted ⁵⁷Co beam. The computer/disk-based pulse-height analysis system allows all necessary calculations to be done on-line. Experimental requirements for an in-plant installation or a test and evaluation are discussed
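The calibration step lends itself to a small sketch: fit a straight-line calibration curve from carefully prepared standards, then invert it for an unknown stream sample. The count rates and concentrations below are invented illustration values, not measurements from the paper, and the real procedure also corrects for acid normality via the transmitted-beam measurement.

```python
import numpy as np

# Hypothetical calibration standards: known U concentrations (g/l) and
# net K x-ray count rates (counts/s) measured under identical geometry.
conc_std = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
rate_std = np.array([51.0, 249.0, 502.0, 998.0, 2004.0])

# Least-squares fit of rate = a * conc + b.
a, b = np.polyfit(conc_std, rate_std, 1)

def concentration(rate):
    """Invert the linear calibration curve for an unknown stream sample."""
    return (rate - b) / a

# A measured 750 counts/s maps to roughly 150 g/l with these numbers.
estimated = concentration(750.0)
```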

  12. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
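The key speed trick, replacing spatial convolution of the sub-binned projection with a multiplication in frequency space, can be sketched as follows. The sub-bin counts and the 3-sub-bin uniform footprint are invented for illustration; the real footprint varies with projection angle as described above.

```python
import numpy as np

def convolve_fft(profile, kernel):
    """Linear convolution of a projection profile with a pixel-footprint
    kernel, done as an FFT multiplication instead of a spatial convolution."""
    n = len(profile) + len(kernel) - 1
    nfft = 1 << (n - 1).bit_length()        # zero-pad to a power of two
    P = np.fft.rfft(profile, nfft)
    K = np.fft.rfft(kernel, nfft)
    return np.fft.irfft(P * K, nfft)[:n]

# Nearest-neighbour counts deposited into subdivided bins (3 sub-bins per
# original bin), convolved with a uniform square-pixel footprint spanning
# 3 sub-bins, then compressed back to the original bin size.
sub_bins = np.array([0.0, 1.0, 0.0, 0.0, 2.0, 0.0])
footprint = np.array([1 / 3, 1 / 3, 1 / 3])
smeared = convolve_fft(sub_bins, footprint)
compressed = smeared[:6].reshape(2, 3).sum(axis=1)   # back to 2 coarse bins
```

The FFT route pays off on array processors because the transform and the pointwise multiply vectorize cleanly, whereas a spatial convolution loop does not.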

  13. Smart photodetector arrays for error control in page-oriented optical memory

    Science.gov (United States)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data

  14. Spectrochemical determination of beryllium and lithium in stream sediments

    International Nuclear Information System (INIS)

    Gallimore, D.L.; Hues, A.D.; Palmer, B.A.; Cox, L.E.; Simi, O.R.; Bieniewski, T.M.; Steinhaus, D.W.

    1979-11-01

    A spectrochemical method was developed to analyze 200 or more samples of stream sediments per day for beryllium and lithium. One part of ground stream sediment is mixed with two parts graphite-SiO₂ buffer, packed into a graphite electrode, and excited in a direct-current arc. The resulting emission goes to a 3.4-m, direct-reading, Ebert spectrograph. A desk-top computer system is used to record and process the signals, and to report the beryllium and lithium concentrations. The limits of detection are 0.2 μg/g for beryllium and 0.5 μg/g for lithium. For analyses of prepared reference materials, the relative standard deviations were 16% for determining 0.2 to 100 μg/g of beryllium and 15% for determining 0.5 to 500 μg/g of lithium. A correction is made for vanadium interference

  15. Three-dimensional model of corotating streams in the solar wind 3. Magnetohydrodynamic streams

    International Nuclear Information System (INIS)

    Pizzo, V.J.

    1982-01-01

    The focus of this paper is two-fold: (1) to examine how the presence of the spiral magnetic field affects the evolution of interplanetary corotating solar wind streams, and (2) to ascertain the nature of secondary large-scale phenomena likely to be associated with streams having a pronounced three-dimensional (3-D) structure. The dynamics are presumed to be governed by the nonlinear polytropic, single-fluid, 3-D MHD equations. Solutions are obtained with an explicit, Eulerian, finite differences technique that makes use of a simple form of artificial diffusion for handling shocks. For smooth axisymmetric flows, the picture of magnetically induced meridional motions previously established by linear models requires only minor correction. In the case of broad 3-D streams input near the sun, inclusion of the magnetic field is found to retard the kinematic steepening at the stream front substantially but to produce little deviation from planar flow. For the more realistic case of initially sharply bounded streams, however, it becomes essential to account for magnetic effects in the formulation. Whether a full 3-D treatment is required depends upon the latitudinal geometry of the stream

  16. Cytoplasmic Streaming in the Drosophila Oocyte.

    Science.gov (United States)

    Quinlan, Margot E

    2016-10-06

    Objects are commonly moved within the cell by either passive diffusion or active directed transport. A third possibility is advection, in which objects within the cytoplasm are moved with the flow of the cytoplasm. Bulk movement of the cytoplasm, or streaming, as required for advection, is more common in large cells than in small cells. For example, streaming is observed in elongated plant cells and the oocytes of several species. In the Drosophila oocyte, two stages of streaming are observed: relatively slow streaming during mid-oogenesis and streaming that is approximately ten times faster during late oogenesis. These flows are implicated in two processes: polarity establishment and mixing. In this review, I discuss the underlying mechanism of streaming, how slow and fast streaming are differentiated, and what we know about the physiological roles of the two types of streaming.

  17. Low-redundancy linear arrays in mirrored interferometric aperture synthesis.

    Science.gov (United States)

    Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang

    2016-01-15

    Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, an antenna array with low redundancy has the potential to achieve a high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in a short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transmatrix of those LRLAs. The results of imaging simulations demonstrate the effectiveness of the proposed method.
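For a conventional linear array, redundancy is usually measured as the number of element pairs divided by the number of distinct spacings those pairs cover. The sketch below computes this generic figure of merit; it is not the Letter's MIAS-specific construction, where the mirrored geometry changes the accounting.

```python
from itertools import combinations
from collections import Counter

def redundancy(positions):
    """Redundancy R of a linear array: element pairs per distinct positive
    spacing. R = 1 means every spacing is covered exactly once."""
    diffs = Counter(b - a for a, b in combinations(sorted(positions), 2))
    n_pairs = sum(diffs.values())
    return n_pairs / len(diffs)

# The classic minimum-redundancy array [0, 1, 4, 6] covers spacings 1..6
# exactly once (R = 1), while a uniform array repeats spacings (R = 2).
r_sparse = redundancy([0, 1, 4, 6])
r_uniform = redundancy([0, 1, 2, 3])
```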

  18. In-stream Physical Heterogeneity, Rainfall Aided Flushing, and Discharge on Stream Water Quality.

    Science.gov (United States)

    Gomes, Pattiyage I A; Wai, Onyx W H

    2015-08-01

    Implications of in-stream physical heterogeneity, rainfall-aided flushing, and stream discharge for water quality control have been investigated in a headwater stream of a climatic region with contrasting dry and wet seasons. In the dry (low-flow) season, physical heterogeneity showed a positive correlation with good water quality. In the wet season, however, physical heterogeneity had little or no significant effect on water quality variations. Furthermore, physical heterogeneity appeared to be more complementary with good water quality following rainfall events. In many cases stream discharge was a reason for poor water quality: in the dry season, graywater inputs to the stream could be held responsible, while in the wet season it was probably the result of catchment-level disturbances (e.g., regulation of ephemeral freshwater paths). Overall, this study revealed the importance of catchment-based approaches to water quality improvement in tandem with in-stream approaches framed on a temporal scale.

  19. Stream II-V5: Revision Of Stream II-V4 To Account For The Effects Of Rainfall Events

    International Nuclear Information System (INIS)

    Chen, K.

    2010-01-01

    STREAM II-V4 is the aqueous transport module currently used by the Savannah River Site emergency response Weather Information Display (WIND) system. The transport model of the Water Quality Analysis Simulation Program (WASP) was used by STREAM II to perform contaminant transport calculations. WASP5 is a US Environmental Protection Agency (EPA) water quality analysis program that simulates contaminant transport and fate through surface water. STREAM II-V4 predicts peak concentration and peak concentration arrival time at downstream locations for releases from the SRS facilities to the Savannah River. The input flows for STREAM II-V4 are derived from the historical flow records measured by the United States Geological Survey (USGS). The stream flow for STREAM II-V4 is fixed and the flow only varies with the month in which the releases are taking place. Therefore, the effects of flow surge due to a severe storm are not accounted for by STREAM II-V4. STREAM II-V4 has been revised to account for the effects of a storm event. The steps used in this method are: (1) generate rainfall hyetographs as a function of total rainfall in inches (or millimeters) and rainfall duration in hours; (2) generate watershed runoff flow based on the rainfall hyetographs from step 1; (3) calculate the variation of stream segment volume (cross section) as a function of flow from step 2; (4) implement the results from steps 2 and 3 into the STREAM II model. The revised model (STREAM II-V5) will find the proper stream inlet flow based on the total rainfall and rainfall duration as input by the user. STREAM II-V5 adjusts the stream segment volumes (cross sections) based on the stream inlet flow. The rainfall based stream flow and the adjusted stream segment volumes are then used for contaminant transport calculations.
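The four steps can be caricatured in a few functions. Everything here is a stand-in: the uniform hyetograph, the rational-method runoff coefficient, and the power-law volume rating are assumptions for illustration, not the WASP/STREAM II formulations.

```python
def hyetograph(total_rain_mm, n_steps):
    """Step 1: a uniform hyetograph (mm per hourly step); the real model's
    temporal rainfall distribution is not reproduced here."""
    return [total_rain_mm / n_steps] * n_steps

def runoff(hyeto_mm, area_km2, coeff=0.3):
    """Step 2: rational-method-style runoff flow (m^3/s) per hourly step.
    The runoff coefficient is a placeholder value."""
    return [coeff * (mm / 1000.0) * area_km2 * 1e6 / 3600.0 for mm in hyeto_mm]

def segment_volume(flow_m3s, base_volume_m3, base_flow_m3s, exponent=0.6):
    """Step 3: scale a stream segment's volume (cross section) with inlet
    flow via a hypothetical power-law rating."""
    return base_volume_m3 * (flow_m3s / base_flow_m3s) ** exponent

# Steps chained for a 20 mm storm over 4 hours on a 50 km^2 watershed,
# added to a 10 m^3/s baseflow.  Step 4, feeding the adjusted flow and
# volumes into the transport calculation, is omitted.
storm_flow = runoff(hyetograph(20.0, 4), 50.0)
inlet_flow = storm_flow[0] + 10.0
volume = segment_volume(inlet_flow, base_volume_m3=5.0e4, base_flow_m3s=10.0)
```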

  20. Stream Clustering of Growing Objects

    Science.gov (United States)

    Siddiqui, Zaigham Faraz; Spiliopoulou, Myra

    We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream, e.g., streams of Customer and Transaction records. As the Transaction stream accumulates, the Customers’ profiles grow. First, we use incremental propositionalisation to convert the multi-table stream into a single-table stream upon which we apply clustering. For this purpose, we develop an online version of the K-Means algorithm that can handle these swelling objects as well as any new objects that arrive. The algorithm also monitors the quality of the model and performs re-clustering when it deteriorates. We evaluate our method on the PKDD Challenge 1999 dataset.
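An online K-Means update of the kind described might look like the sketch below. This is a generic single-pass variant, not the authors' algorithm: the propositionalisation step and the quality-triggered re-clustering are omitted, and a growing object is simply re-presented to the model with its updated feature vector.

```python
import math

class OnlineKMeans:
    """Minimal online K-Means for streaming points in the plane."""

    def __init__(self, k):
        self.k = k
        self.centroids = []   # list of [x, y] vectors
        self.counts = []      # points absorbed per centroid

    def _nearest(self, point):
        return min(range(self.k),
                   key=lambda i: math.dist(point, self.centroids[i]))

    def update(self, point):
        if len(self.centroids) < self.k:       # seed with the first k points
            self.centroids.append(list(point))
            self.counts.append(1)
            return len(self.centroids) - 1
        i = self._nearest(point)
        self.counts[i] += 1
        eta = 1.0 / self.counts[i]             # shrinking learning rate
        self.centroids[i] = [c + eta * (p - c)
                             for c, p in zip(self.centroids[i], point)]
        return i

km = OnlineKMeans(k=2)
# A customer's profile grows: each updated vector is streamed to the model.
for point in [(0.0, 0.0), (10.0, 10.0), (0.5, 0.2), (9.5, 10.2), (0.4, 0.1)]:
    km.update(point)
```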

  1. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10¹¹ collision events per year. A typical physics analysis deals with a final sample of up to 10⁷ events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of user analysis jobs that read these data. The scheme's efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (the Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data recorded in 2017.
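The objective being optimized is easy to state in code: each stream stores the union of the events its lines select, so events selected by lines in different streams are written twice, and grouping lines with overlapping selections reduces the total volume written. The line names and event sets below are invented toy data, not LHCb lines.

```python
def stored_events(streams):
    """Total events written to disk: each stream stores the union of the
    events its lines select; events shared across streams are duplicated."""
    return sum(len(set().union(*stream)) for stream in streams)

# Hypothetical line selections (sets of event ids).
lines = {
    "muon_a": {1, 2, 3, 4},
    "muon_b": {2, 3, 4, 5},
    "charm_a": {10, 11, 12},
    "charm_b": {11, 12, 13},
}

# Grouping similar lines into the same stream duplicates fewer events
# than splitting overlapping lines across streams.
similar = stored_events([[lines["muon_a"], lines["muon_b"]],
                         [lines["charm_a"], lines["charm_b"]]])
mixed = stored_events([[lines["muon_a"], lines["charm_a"]],
                       [lines["muon_b"], lines["charm_b"]]])
```

A real optimizer would search over groupings to minimize this quantity while also keeping the stream sizes balanced, as the abstract describes.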

  2. Computational analysis of vertical axis wind turbine arrays

    Science.gov (United States)

    Bremseth, J.; Duraisamy, K.

    2016-10-01

    Canonical problems involving single, pairs, and arrays of vertical axis wind turbines (VAWTs) are investigated numerically with the objective of understanding the underlying flow structures and their implications on energy production. Experimental studies by Dabiri (J Renew Sustain Energy 3, 2011) suggest that VAWTs demand less stringent spacing requirements than their horizontal axis counterparts and additional benefits may be obtained by optimizing the placement and rotational direction of VAWTs. The flowfield of pairs of co-/counter-rotating VAWTs shows some similarities with pairs of cylinders in terms of wake structure and vortex shedding. When multiple VAWTs are placed in a column, the extent of the wake is seen to spread further downstream, irrespective of the direction of rotation of individual turbines. However, the aerodynamic interference between turbines gives rise to regions of excess momentum between the turbines which lead to significant power augmentations. Studies of VAWTs arranged in multiple columns show that the downstream columns can actually be more efficient than the leading column, a proposition that could lead to radical improvements in wind farm productivity.

  3. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The papers range from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying SNP arrays slightly differently. Finally, two other papers evaluate the use of SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also give guidelines for future applications of SNP arrays.

  4. A Characterization and Evaluation of Coal Liquefaction Process Streams

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-10-01

    An updated assessment of the physico-chemical analytical methodology applicable to coal-liquefaction product streams and a review of the literature dealing with the modeling of fossil-fuel resid conversion to product oils are presented in this document. In addition, a summary is provided for the University of Delaware program conducted under this contract to develop an empirical test to determine relative resid reactivity and to construct a computer model to describe resid structure and predict reactivity.

  5. Sound field control with a circular double-layer array of loudspeakers

    DEFF Research Database (Denmark)

    Chang, Jiho; Jacobsen, Finn

    2012-01-01

    This paper describes a method of generating a controlled sound field for listeners inside a circular array of loudspeakers without disturbing people outside the array appreciably. To achieve this objective, a double-layer array of loudspeakers is used. Several solution methods are suggested, and their performance is examined using computer simulations. Two performance indices are used in this work, (a) the level difference between the average sound energy density in the listening zone and that in the quiet zone (sometimes called “the acoustic contrast”), and (b) a normalized measure of the deviations between the desired and the generated sound field in the listening zone. It is concluded that the best compromise is obtained with a method that combines pure contrast maximization with a pressure matching technique.
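Pure contrast maximization, one of the methods in this family, reduces to a generalized eigenvalue problem: choose loudspeaker weights that maximize bright-zone energy relative to quiet-zone energy. The sketch below is a generic version of that idea with random complex matrices standing in for measured loudspeaker-to-microphone transfer functions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, Mb, Md = 8, 6, 6                          # loudspeakers, bright/dark mics
# Stand-in transfer matrices; the dark zone is attenuated for illustration.
Gb = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
Gd = 0.1 * (rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L)))

A = Gb.conj().T @ Gb                         # bright-zone correlation matrix
B = Gd.conj().T @ Gd + 1e-6 * np.eye(L)      # dark-zone correlation, regularized

# Weights maximizing q^H A q / q^H B q: dominant eigenvector of B^-1 A.
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
q = vecs[:, np.argmax(vals.real)]

def contrast_db(w):
    """Acoustic contrast (energy-density level difference) for weights w."""
    num = np.real(w.conj() @ A @ w)
    den = np.real(w.conj() @ B @ w)
    return 10 * np.log10(num / den)
```

Contrast maximization alone says nothing about the shape of the field in the listening zone, which is why the record above favors combining it with pressure matching.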

  6. The ventral stream offers more affordance and the dorsal stream more memory than believed

    NARCIS (Netherlands)

    Postma, Albert; van der Lubbe, Robert Henricus Johannes; Zuidhoek, Sander

    2002-01-01

    Contrary to Norman's proposal, processing of affordance is likely to occur not solely in the dorsal stream but also in the ventral stream. Moreover, the dorsal stream might do more than just serve an important role in motor actions: it supports egocentric location coding as well. As such, it would

  7. The role of remediation, natural alkalinity sources and physical stream parameters in stream recovery.

    Science.gov (United States)

    Kruse, Natalie A; DeRose, Lisa; Korenowsky, Rebekah; Bowman, Jennifer R; Lopez, Dina; Johnson, Kelly; Rankin, Edward

    2013-10-15

    Acid mine drainage (AMD) negatively impacts not only stream chemistry, but also aquatic biology. The ultimate goal of AMD treatment is restoration of the biological community, but that goal is rarely explicit in treatment system design. Hewett Fork in Raccoon Creek Watershed, Ohio, has been impacted by historic coal mining and has been treated with a calcium oxide doser in the headwaters of the watershed since 2004. All of the acidic inputs are isolated to a 1.5 km stretch of stream in the headwaters of the Hewett Fork watershed. The macroinvertebrate and fish communities have begun to recover and it is possible to distinguish three zones downstream of the doser: an impaired zone, a transition zone and a recovered zone. Alkalinity from both the doser and natural sources and physical stream parameters play a role in stream restoration. In Hewett Fork, natural alkaline additions downstream are higher than those from the doser. Both alkaline additions and stream velocity drive sediment and metal deposition. Metal deposition occurs in several patterns; aluminum tends to deposit in regions of low stream velocity, while iron tends to deposit once sufficient alkalinity is added to the system downstream of mining inputs. The majority of metal deposition occurs upstream of the recovered zone. Both the physical stream parameters and natural alkalinity sources influence biological recovery in treated AMD streams and should be considered in remediation plans.

  8. Multiobjective heat exchanger network synthesis based on grouping of process streams

    Energy Technology Data Exchange (ETDEWEB)

    Laukkanen, T.P.

    2012-06-15

    Heat exchanger network synthesis (HENS) is an important process synthesis problem, and a variety of tools and methods have been presented to solve it, mainly because of its importance in achieving energy savings in industrial processes in a cost-efficient way. The problem is hard to solve and has been proven NP-hard (nondeterministic polynomial-time), so it is not known whether a computationally efficient (polynomial-time) algorithm for it exists. Methods that provide good approximate solutions with reasonable computational requirements are therefore useful. The objective of this thesis is to present new HENS approaches that generate good solutions in a computationally efficient way while optimizing all the objectives of HENS simultaneously. The main approach is to group process streams, either on the basis of groups the streams actually belong to in the process, or by constructing artificial groups. In the latter approach, the idea is to decompose the set of binary variables, i.e. the variables that define the existence of heat exchanger matches, into two separate problems. This reduces the number of options for connecting the streams compared with the undecomposed formulation, which decreases the solution time and makes larger HENS problems tractable. In this work the multiobjective HENS problem is solved either with the traditional weighting method or with an interactive multiobjective optimization method. In the weighting method the weights are the annual costs of the different objectives. In the interactive method the Decision Maker (DM) controls the decision-making process by classifying the objectives at each iteration. This multiobjective approach provides the benefit of using interactive multiobjective optimization, so that it is possible to
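The weighting method mentioned above scalarizes the competing objectives into a single annual cost. A minimal sketch, with invented objective names and cost figures rather than anything from the thesis:

```python
def weighted_annual_cost(objectives, weights):
    """Weighting method: collapse multiple HENS objectives into one
    scalar annual cost using cost weights (here all in k-euro/yr)."""
    return sum(weights[name] * value for name, value in objectives.items())

# Two hypothetical network designs with utility, area, and unit-count costs.
designs = {
    "A": {"utility": 120.0, "area": 80.0, "units": 30.0},
    "B": {"utility": 90.0, "area": 110.0, "units": 35.0},
}
weights = {"utility": 1.0, "area": 1.0, "units": 1.0}
best = min(designs, key=lambda d: weighted_annual_cost(designs[d], weights))
```

Changing the weights traces out different points on the trade-off surface, which is exactly the lever the interactive method hands to the decision maker instead.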

  9. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin; Ahmadia, Aron; Brown, Jed; Gunnels, John A.; Keyes, David E.

    2012-01-01

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution

  10. Statistically optimized near field acoustic holography using an array of pressure-velocity probes

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Jaud, Virginie

    2007-01-01

    of a measurement aperture that extends well beyond the source can be relaxed. Both NAH and SONAH are based on the assumption that all sources are on one side of the measurement plane whereas the other side is source free. An extension of the SONAH procedure based on measurement with a double layer array...... of pressure microphones has been suggested. The double layer technique makes it possible to distinguish between sources on the two sides of the array and thus suppress the influence of extraneous noise coming from the “wrong” side. It has also recently been demonstrated that there are significant advantages...... in NAH based on an array of acoustic particle velocity transducers (in a single layer) compared with NAH based on an array of pressure microphones. This investigation combines the two ideas and examines SONAH based on an array of pressure-velocity intensity probes through computer simulations as well...

  11. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
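The online-updating idea for stream data described above can be illustrated with a classic one-pass statistic: Welford's algorithm, which updates mean and variance one observation at a time without ever holding the full stream in memory. This is a generic streaming-statistics sketch, not the article's variable-selection extension:

```python
# Welford's online algorithm: incrementally update mean and variance
# so that memory use is constant regardless of stream length.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Sample variance; zero until at least two observations arrive."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

if __name__ == "__main__":
    stats = RunningStats()
    for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        stats.update(x)
    print(stats.mean, stats.variance())  # 5.0 and 32/7 ≈ 4.571
```

The same pattern, i.e. carrying a small sufficient summary and folding in each new batch, underlies the online-updating estimators the article surveys.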

  13. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase resolution by enlarging the aperture without adding system complexity. Designing a sparse array means choosing the best, or at least a better, configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm alone, but found that the resulting arrays had poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed by the minimum-redundancy array to guarantee spatial resolution, and a genetic algorithm then optimizes the remaining elements. Sparse arrays designed by this hybrid method show much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm its effectiveness.
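The evolutionary half of the hybrid design above can be sketched with a stripped-down elitist genetic search that picks k of N element positions to minimize a crude peak-sidelobe cost. This is a toy sketch only: the minimum-redundancy seeding step and the paper's actual cost function are omitted, and all names and parameter values are hypothetical.

```python
# Toy elitist GA for sparse linear array design: choose k active
# elements out of N positions to minimize a crude peak-sidelobe level.
import cmath
import random

def sidelobe_level(active, n_samples=128):
    """Peak |array factor| away from broadside, normalized by element count."""
    peak = 0.0
    for s in range(n_samples):
        u = -1.0 + 2.0 * s / (n_samples - 1)
        if abs(u) < 0.15:  # crude mainlobe exclusion region
            continue
        af = sum(cmath.exp(1j * cmath.pi * n * u) for n in active)
        peak = max(peak, abs(af))
    return peak / len(active)

def mutate(active, n_elements, rng):
    """Swap one active element to a randomly chosen inactive position."""
    child = set(active)
    child.remove(rng.choice(sorted(child)))
    child.add(rng.choice([n for n in range(n_elements) if n not in child]))
    return frozenset(child)

def evolve(n_elements=16, k=8, generations=25, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [frozenset(rng.sample(range(n_elements), k)) for _ in range(pop_size)]
    best = min(pop, key=sidelobe_level)
    for _ in range(generations):
        # Keep the elite; refill the population with mutated copies of it.
        pop = [best] + [mutate(best, n_elements, rng) for _ in range(pop_size - 1)]
        best = min(pop, key=sidelobe_level)
    return best, sidelobe_level(best)
```

Seeding part of the chromosome from a minimum-redundancy array, as the paper does, would amount to freezing those positions before the mutation step, shrinking the search space the GA has to cover.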

  14. SING-dialogue subsystem for graphical representation of one-dimensional array contents

    International Nuclear Information System (INIS)

    Karlov, A.A.; Kirilov, A.S.

    1979-01-01

    General principles of the organization and the main features of a dialogue subsystem for graphical representation of one-dimensional array contents are considered. The subsystem was developed for the remote display station of the JINR BESM-6 computer. Examples of using the subsystem to draw curves and histograms are given. The subsystem meets the requirements of modern dialogue systems: it is "open" for extension and can be installed on other computers

  15. Detecting fluid leakage of a reservoir dam based on streaming self-potential measurements

    Science.gov (United States)

    Song, Seo Young; Kim, Bitnarae; Nam, Myung Jin; Lim, Sung Keun

    2015-04-01

    Among the many reservoir dams used for agriculture in suburban areas of South Korea, water leakage has been reported several times. The dam considered in this study, located in Gyeong-buk in the south-east of the Korean Peninsula, was reported to have a large leakage at the right foot of the downstream side. Because leakage can lead to dam failure, both geological surveys and geophysical explorations were carried out for a precision safety diagnosis. The geophysical exploration comprised electrical-resistivity and self-potential surveys, while the geological surveys included a water-permeability test, a standard penetration test, and sampling of undisturbed samples during the drilling investigation. The geophysical surveys were made not only along the top of the dam but also transverse to the heel of the dam. Leakage in water installations not only changes the known heterogeneous structure of the dam body but also causes a streaming spontaneous (self-) potential (SP) anomaly; these effects can be detected by electrical-resistivity and SP measurements, respectively. For the interpretation of the streaming SP we used a trial-and-error method, comparing synthetic SP data with field SP data to update the model. For the computation, we first invert the resistivity data to obtain the distorted resistivity structure of the dam levee and then perform three-dimensional electrical-resistivity modeling for the streaming-potential distribution of the dam levee. Our simulation algorithm for the streaming SP distribution, based on an integrated finite-difference scheme, computes the two-dimensional (2D) SP distribution from the calculated fluid flow velocities for a given permeability structure together with its physical properties. The permeability is repeatedly updated based on the error between synthetic and field SP data until the synthetic data match the field data. Through this trial-and-error-based SP interpretation, we locate the
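The trial-and-error loop described in this abstract, simulate, compare synthetic with field data, adjust the model, repeat, can be sketched with a deliberately simple scalar forward model. The real workflow couples 3-D resistivity modeling to a finite-difference SP simulation; here the "forward model" is a hypothetical placeholder where the SP anomaly is proportional to permeability:

```python
# Trial-and-error model update: nudge a permeability estimate until the
# synthetic self-potential (SP) response matches the "field" data.
# forward_sp is a toy stand-in for the actual 3-D SP simulation.

def forward_sp(permeability, coupling=2.5):
    """Toy forward model: synthetic SP amplitude for a given permeability."""
    return coupling * permeability

def misfit(synthetic, field):
    return abs(synthetic - field)

def invert(field_sp, k0=1.0, tol=1e-6, max_iter=100):
    """Multiplicatively update permeability until synthetic matches field."""
    k = k0
    for _ in range(max_iter):
        syn = forward_sp(k)
        if misfit(syn, field_sp) < tol:
            break
        k *= field_sp / syn  # simple proportional update rule
    return k

if __name__ == "__main__":
    recovered = invert(field_sp=5.0)
    print(recovered)  # converges to 2.0, since 2.5 * 2.0 = 5.0
```

With a realistic nonlinear forward model the update rule would be less direct and many iterations would be needed, but the structure of the loop, forward simulate, measure misfit, update, test convergence, is the same.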

  16. The long term response of stream flow to climatic warming in headwater streams of interior Alaska

    Science.gov (United States)

    Jeremy B. Jones; Amanda J. Rinehart

    2010-01-01

    Warming in the boreal forest of interior Alaska will have fundamental impacts on stream ecosystems through changes in stream hydrology resulting from upslope loss of permafrost, alteration of availability of soil moisture, and the distribution of vegetation. We examined stream flow in three headwater streams of the Caribou-Poker Creeks Research Watershed (CPCRW) in...

  17. The Stream-Catchment (StreamCat) Dataset: A database of watershed metrics for the conterminous USA

    Science.gov (United States)

    We developed an extensive database of landscape metrics for ~2.65 million streams, and their associated catchments, within the conterminous USA: The Stream-Catchment (StreamCat) Dataset. These data are publicly available and greatly reduce the specialized geospatial expertise n...

  18. Stream processing health card application.

    Science.gov (United States)

    Polat, Seda; Gündem, Taflan Imre

    2012-10-01

    In this paper, we propose a data stream management system embedded in a smart card for handling and storing user-specific summaries of streaming data coming from medical sensor measurements and/or other medical measurements. The data stream management system that we propose for a health card can handle the stream data rates of commonly known medical devices and sensors. It incorporates a context-awareness feature that acts according to user-specific information. The proposed system is inexpensive and provides security for private data by enhancing the capabilities of smart health cards. The stream data management system is tested on a real smart card using both synthetic and real data.

  19. An Expedient but Fascinating Geophysical Chimera: The Pinyon Flat Seismic Strain Point Array

    Science.gov (United States)

    Langston, C. A.

    2016-12-01

    The combination of a borehole Gladwin Tensor Strain Meter (GTSM) and a co-located three-component broadband seismometer (BB) can theoretically be used to determine the propagation attributes of P-SV waves in vertically inhomogeneous media, such as horizontal phase velocity and azimuth of propagation, through application of wave gradiometry. A major requirement for this to be successful is to have well-calibrated strain and seismic sensors, so that absolute wave amplitudes from both systems can be relied upon. A "point" seismic array is constructed using the PBO GTSM station B084 and co-located BB seismic stations from an open array experiment deployed by UCSD, as well as the PFO station at the Pinyon Flat facility. Site amplitude statics for all three ground-motion components are found for the 14-element (13 PY stations + PFO), small-aperture seismic array using data from 47 teleseisms recorded from 2014 to the present. Precision of the amplitude measurement at each site is better than 0.2% for vertical components, 0.5% for EW components, and 1% for NS components; relative amplitudes among sites of the array are often consistent to better than 1%, attesting to the high quality of the instrumentation and installation. The wavefield and related horizontal strains are computed for the location of B084 using a second-order Taylor expansion of the observed waveforms from moderate (~M4) regional events. The computed seismic-array areal, differential, and shear strains show excellent correlation in both phase and amplitude with those recorded by B084 when using the calibration matrix previously determined from teleseismic strains across the entire ANZA seismic network. Use of the GTSM-BB "point" array significantly extends the bandwidth of gradiometry calculations over the small-aperture seismic array by nearly two orders of magnitude, from 0.5 Hz down to 0.01 Hz. 
In principle, a seismic strain point array could be constructed from every PBO GTSM with a co-located seismometer to help serve earthquake early
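The second-order Taylor expansion of the wavefield across an array amounts to a small least-squares fit of the field and its spatial derivatives from station samples. A 1-D sketch with toy synthetic data follows (the actual gradiometry uses calibrated 2-D amplitudes over the full array; the station offsets and field values below are invented for illustration):

```python
# Second-order Taylor fit of a wavefield sampled at array offsets:
# w(x) ~= c0 + c1*x + c2*x**2, solved via the normal equations.
# c1 approximates the spatial gradient (strain-like quantity) at x = 0.

def solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def taylor_fit(offsets, values):
    """Least-squares Taylor coefficients (c0, c1, c2) via normal equations."""
    basis = [[1.0, x, x * x] for x in offsets]
    ata = [[sum(bi[r] * bi[c] for bi in basis) for c in range(3)]
           for r in range(3)]
    atb = [sum(bi[r] * v for bi, v in zip(basis, values)) for r in range(3)]
    return solve3(ata, atb)

if __name__ == "__main__":
    offsets = [-2.0, -1.0, 0.0, 1.0, 2.0]          # hypothetical station offsets
    values = [3.0 - 0.5 * x + 0.1 * x * x for x in offsets]  # known test field
    c0, c1, c2 = taylor_fit(offsets, values)
    print(c0, c1, c2)  # recovers 3.0, -0.5, 0.1
```

Extending this to two dimensions with six basis terms (1, x, y, x², xy, y²) yields the horizontal strain estimates at the reference station, which is the essence of the gradiometric calculation described above.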

  20. Leaf litter processing in West Virginia mountain streams: effects of temperature and stream chemistry

    Science.gov (United States)

    Jacquelyn M. Rowe; William B. Perry; Sue A. Perry

    1996-01-01

    Climate change has the potential to alter detrital processing in headwater streams, which receive the majority of their nutrient input as terrestrial leaf litter. Early placement of experimental leaf packs in streams, one month prior to most abscission, was used as an experimental manipulation to increase stream temperature during leaf pack breakdown. We studied leaf...