WorldWideScience

Sample records for streaming algorithms extended

  1. Fast algorithm for automatically computing Strahler stream order

    Science.gov (United States)

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
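
    The record above does not include the paper's GIS implementation; as a rough illustration of the Strahler ordering rule itself, the sketch below computes orders for a tree-like stream network in a single topological sweep (near-linear in the number of segments). The dictionary-based representation and function name are illustrative assumptions, not the paper's data model, and braided streams are not handled here.

```python
from collections import defaultdict, deque

def strahler_orders(upstream):
    """Compute Strahler order for each segment of a stream network.

    `upstream` maps a segment id to the list of segments that flow into it
    (an empty list marks a headwater segment).  This sketch assumes a
    tree-like network; the paper's algorithm additionally handles braided
    streams and multiple drainage outlets.
    """
    # Downstream adjacency and pending-inflow counters for a topological sweep.
    downstream = defaultdict(list)
    pending = {}
    for seg, ups in upstream.items():
        pending[seg] = len(ups)
        for u in ups:
            downstream[u].append(seg)

    order = {}
    queue = deque(seg for seg, n in pending.items() if n == 0)
    while queue:
        seg = queue.popleft()
        ups = upstream[seg]
        if not ups:                       # headwater segment
            order[seg] = 1
        else:
            top = max(order[u] for u in ups)
            ties = sum(1 for u in ups if order[u] == top)
            # Strahler rule: order increases only when two equal-order
            # tributaries meet; otherwise the maximum order propagates.
            order[seg] = top + 1 if ties >= 2 else top
        for d in downstream[seg]:
            pending[d] -= 1
            if pending[d] == 0:
                queue.append(d)
    return order

# Example: two first-order headwaters joining into a second-order segment.
print(strahler_orders({"a": [], "b": [], "c": ["a", "b"]}))  # {'a': 1, 'b': 1, 'c': 2}
```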

  2. Data streams: algorithms and applications

    National Research Council Canada - National Science Library

    Muthukrishnan, S

    2005-01-01

    ... massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [175]. S. Muthukrishnan Rutgers University, New Brunswick, NJ, USA, muthu@cs...

  3. Stream Deniable-Encryption Algorithms

    Directory of Open Access Journals (Sweden)

    N.A. Moldovyan

    2016-04-01

    Full Text Available A method for stream deniable encryption of a secret message is proposed, which is computationally indistinguishable from the probabilistic encryption of some fake message. The method generates two key streams with some secure block cipher: one key stream is generated depending on the secret key and the other depending on the fake key. The key streams are mixed with the secret and fake data streams so that the output ciphertext looks like the ciphertext produced by some probabilistic encryption algorithm applied to the fake message under the fake key. When the receiver and/or sender of the ciphertext are coerced to open the encryption key and the source message, they open the fake key and the fake message. To expose this lie, the coercer would have to demonstrate the possibility of an alternative decryption of the ciphertext; however, this is a computationally hard problem.
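
    The abstract does not give the exact mixing rule, so the following is only a toy illustration of the two-keystream idea: a SHA-256 counter-mode keystream stands in for the "secure block cipher", and a Chinese-remainder combination (an assumption of this sketch, not the paper's construction) lets the same ciphertext decrypt to the fake message under the fake key and to the secret message under the secret key.

```python
import hashlib

P, Q = 257, 263                      # small coprime moduli, illustration only
INV_P_MOD_Q = pow(P, -1, Q)

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256 in counter mode as a stand-in for the secure block cipher."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt(secret: bytes, fake: bytes, secret_key: bytes, fake_key: bytes):
    assert len(secret) == len(fake)
    ks_s = keystream(secret_key, len(secret))
    ks_f = keystream(fake_key, len(fake))
    cipher = []
    for ps, pf, ks, kf in zip(secret, fake, ks_s, ks_f):
        a, b = ps ^ ks, pf ^ kf
        # Chinese-remainder mixing: c = a (mod P) and c = b (mod Q).
        c = a + P * (((b - a) * INV_P_MOD_Q) % Q)
        cipher.append(c)
    return cipher

def decrypt(cipher, key: bytes, fake: bool) -> bytes:
    ks = keystream(key, len(cipher))
    mod = Q if fake else P
    return bytes((c % mod) ^ k for c, k in zip(cipher, ks))

c = encrypt(b"attack at dawn", b"happy birthday", b"real-key", b"fake-key")
print(decrypt(c, b"fake-key", fake=True))   # b'happy birthday'
print(decrypt(c, b"real-key", fake=False))  # b'attack at dawn'
```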

  4. STREAMFINDER I: A New Algorithm for detecting Stellar Streams

    Science.gov (United States)

    Malhan, Khyati; Ibata, Rodrigo A.

    2018-04-01

    We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the ESA/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ˜15 members (ΣG ˜ 33.6 mag arcsec-2) in the Gaia dataset.

  5. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock markets, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from ChinaFLUX sensor network data streams.

  6. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon

    2012-07-01

    Full Text Available IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22-27 July 2012: A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images, B.P. Salmon et al. (abstract-only record).

  7. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    We study this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
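
    As a rough sketch of the resource-augmented streaming setting described above (a budget of 2k interior points compared against the optimal simplification with k points), the code below greedily drops the interior vertex whose removal adds the least point-to-segment error. The greedy rule and the error measure are illustrative choices, not the authors' algorithm or its competitive-ratio analysis.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

class StreamingSimplifier:
    """Greedy streaming path simplification keeping at most 2k interior points.

    Only a sketch of the setting in the abstract; the removal rule below is an
    illustrative choice, not the paper's algorithm.
    """
    def __init__(self, k):
        self.budget = 2 * k
        self.points = []

    def push(self, p):
        self.points.append(p)
        while len(self.points) - 2 > self.budget:
            # Drop the interior vertex whose removal adds the least error.
            best_i, best_err = None, float("inf")
            for i in range(1, len(self.points) - 1):
                err = point_segment_dist(self.points[i],
                                         self.points[i - 1], self.points[i + 1])
                if err < best_err:
                    best_i, best_err = i, err
            del self.points[best_i]

s = StreamingSimplifier(k=2)
for pt in [(0, 0), (1, 0.1), (2, -0.1), (3, 0.2), (4, 0), (5, 3), (6, 0)]:
    s.push(pt)
print(s.points)
```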

  8. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of multimedia data, including images and video, is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests is performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that, in comparison to the A5/1 and W7 stream ciphers, the present algorithm offers the same security level, is faster, and is suitable for real-time applications.

  9. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay, and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer of the scalable video, without the need for overhead. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  10. A Plagiarism Detection Algorithm based on Extended Winnowing

    Directory of Open Access Journals (Sweden)

    Duan Xuliang

    2017-01-01

    Full Text Available Plagiarism is a common problem faced by academia and education. Mature commercial plagiarism detection systems are comprehensive and highly accurate, but their expensive detection costs make them unsuitable for real-time, lightweight application environments such as plagiarism detection for student assignments. This paper introduces a method for extending the classic Winnowing plagiarism detection algorithm, expanding its functionality. The extended algorithm retains the text location and length information of the original document while extracting the document's fingerprints, so that locating and marking plagiarized text fragments becomes much easier. Experimental results and several years of running practice show that the extension has little effect on performance; a PC with a normal hardware configuration is able to meet the requirements of small and medium-sized applications. Building on Winnowing's lightweight design, high efficiency, reliability, and flexibility, the extended algorithm further enhances adaptability and extends the range of application areas.
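
    The following sketch shows classic Winnowing fingerprinting extended to carry the position and length of each fingerprinted k-gram, which is the kind of information the extended algorithm retains for locating and marking plagiarized fragments. The parameter values and the use of Python's built-in hash are illustrative assumptions.

```python
def winnow_fingerprints(text, k=5, w=4):
    """Classic winnowing, keeping (hash, position, length) per fingerprint.

    Retaining position and length is the extension the abstract describes:
    it lets matched fingerprints be mapped back to, and marked in, the
    original document.  k and w are illustrative parameter choices.
    """
    if len(text) < k:
        return []
    # Hash every k-gram together with its starting offset.
    grams = [(hash(text[i:i + k]) & 0xFFFFFFFF, i, k)
             for i in range(len(text) - k + 1)]
    fingerprints = []
    last = None
    for start in range(len(grams) - w + 1):
        window = grams[start:start + w]
        # Pick the rightmost minimal hash in the window (standard winnowing rule).
        chosen = min(reversed(window), key=lambda g: g[0])
        if chosen != last:
            fingerprints.append(chosen)
            last = chosen
    return fingerprints

doc = "plagiarism detection with extended winnowing keeps text positions"
for h, pos, length in winnow_fingerprints(doc):
    print(f"hash={h:>10}  at {pos:2d}  text={doc[pos:pos+length]!r}")
```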

  11. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots and also to determine the optimal sublot size. In recent times, researchers have concentrated on applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of other traditional heuristics worked out for this problem. The computational results show that the identified algorithms are more efficient, effective, and better than the algorithms already tested for this problem.

  12. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  13. Introduction to stream: An Extensible Framework for Data Stream Clustering Research with R

    Directory of Open Access Journals (Sweden)

    Michael Hahsler

    2017-02-01

    Full Text Available In recent years, data streams have become an increasingly important area of research for the computer science, database and statistics communities. Data streams are ordered and potentially unbounded sequences of data points created by a typically non-stationary data generating process. Common data mining tasks associated with data streams include clustering, classification and frequent pattern mining. New algorithms for these types of data are proposed regularly and it is important to evaluate them thoroughly under standardized conditions. In this paper we introduce stream, a research tool that includes modeling and simulating data streams as well as an extensible framework for implementing, interfacing and experimenting with algorithms for various data stream mining tasks. The main advantage of stream is that it seamlessly integrates with the large existing infrastructure provided by R. In addition to data handling, plotting and easy scripting capabilities, R also provides many existing algorithms and enables users to interface code written in many programming languages popular among data mining researchers (e.g., C/C++, Java and Python. In this paper we describe the architecture of stream and focus on its use for data stream clustering research. stream was implemented with extensibility in mind and will be extended in the future to cover additional data stream mining tasks like classification and frequent pattern mining.

  14. A fast density-based clustering algorithm for real-time Internet of Things stream.

    Science.gov (United States)

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of algorithms for clustering data streams: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time applications of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets.
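
    The paper's algorithm is not reproduced here; the sketch below only illustrates the general micro-cluster pattern used by density-based stream clustering (absorb a point into the nearest micro-cluster within a radius, otherwise start a new one, and let weights decay so stale clusters fade). Class names, parameters, and values are assumptions.

```python
import math

class MicroCluster:
    """Summary of nearby points: linear sum, squared sum, weight (decayed count)."""
    def __init__(self, point):
        self.ls = list(point)
        self.ss = [x * x for x in point]
        self.w = 1.0

    def center(self):
        return [s / self.w for s in self.ls]

    def insert(self, point):
        for i, x in enumerate(point):
            self.ls[i] += x
            self.ss[i] += x * x
        self.w += 1.0

    def decay(self, lam):
        f = 2 ** (-lam)
        self.ls = [x * f for x in self.ls]
        self.ss = [x * f for x in self.ss]
        self.w *= f

class DensityStreamClusterer:
    """Minimal density-based stream clustering sketch (DenStream-flavoured):
    points are absorbed by the nearest micro-cluster within `radius`,
    otherwise they seed a new one; weights decay so old clusters fade."""
    def __init__(self, radius=1.0, lam=0.01, min_weight=0.5):
        self.radius, self.lam, self.min_weight = radius, lam, min_weight
        self.clusters = []

    def learn_one(self, point):
        for mc in self.clusters:
            mc.decay(self.lam)
        nearest = min(self.clusters, default=None,
                      key=lambda mc: math.dist(mc.center(), point))
        if nearest is not None and math.dist(nearest.center(), point) <= self.radius:
            nearest.insert(point)
        else:
            self.clusters.append(MicroCluster(point))
        # Prune faded micro-clusters (outliers never gain weight).
        self.clusters = [mc for mc in self.clusters if mc.w >= self.min_weight]

stream = [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (0.15, 0.25), (5.1, 4.9)]
model = DensityStreamClusterer(radius=1.0)
for p in stream:
    model.learn_one(p)
print([tuple(round(c, 2) for c in mc.center()) for mc in model.clusters])
```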

  15. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    Science.gov (United States)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptography algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by XORing the plaintext with a key stream. To improve the security of the symmetric stream cipher algorithm, an improvement is made using a Triple Transposition Key, developed from the Transposition Cipher, together with the Base64 algorithm for the final stage of encryption. Experiments show that the ciphertext produced is of good quality and highly random.
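
    A minimal sketch of the pipeline the abstract describes: XOR stream encryption, three passes of columnar transposition ("Triple Transposition Key"), and Base64 as the final stage. The transposition keys, zero-byte padding, and key schedule are assumptions, since the abstract does not specify them; decryption would simply invert each stage in reverse order.

```python
import base64

def transpose(data: bytes, key: str) -> bytes:
    """Columnar transposition: write row-wise into len(key) columns, then read
    the columns in the order given by sorting the key characters."""
    cols = len(key)
    order = sorted(range(cols), key=lambda i: key[i])
    padded = data + b"\x00" * (-len(data) % cols)
    rows = [padded[i:i + cols] for i in range(0, len(padded), cols)]
    return bytes(row[c] for c in order for row in rows)

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Simple repeating-key XOR, standing in for the stream cipher step."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(plaintext: bytes, xor_key: bytes,
            t_keys=("31415", "27182", "16180")) -> str:
    """XOR stream cipher, then a triple (three-pass) transposition, then Base64."""
    data = xor_keystream(plaintext, xor_key)
    for k in t_keys:                      # three transposition passes
        data = transpose(data, k)
    return base64.b64encode(data).decode("ascii")

print(encrypt(b"confidential message", b"secret"))
```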

  16. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    Science.gov (United States)

    Ying Wah, Teh

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of algorithms for clustering data streams: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time applications of IoT devices. Experimental results show that the proposed approach obtains high quality results with low computation time on real and synthetic datasets. PMID:25110753

  17. A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays

    Directory of Open Access Journals (Sweden)

    Kittipong Hiriotappa

    2017-01-01

    Full Text Available Knowing traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate temporal and spatial extent of delays online which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect a deviation from normal traffic patterns, referred to as anomalies, which is used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies are used to temporally and spatially estimate extent of delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using a data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.

  18. A backtracking algorithm for the stream AND-parallel execution of logic programs

    Energy Technology Data Exchange (ETDEWEB)

    Somogyi, Z.; Ramamohanarao, K.; Vaghani, J. (Univ. of Melbourne, Parkville (Australia))

    1988-06-01

    The authors present the first backtracking algorithm for stream AND-parallel logic programs. It relies on compile-time knowledge of the data flow graph of each clause to let it figure out efficiently which goals to kill or restart when a goal fails. This crucial information, which they derive from mode declarations, was not available at compile-time in any previous stream AND-parallel system. They show that modes can increase the precision of the backtracking algorithm, though their algorithm allows this precision to be traded off against overhead on a procedure-by-procedure and call-by-call basis. The modes also allow their algorithm to handle efficiently programs that manipulate partially instantiated data structures and an important class of programs with circular dependency graphs. On code that does not need backtracking, the efficiency of their algorithm approaches that of the committed-choice languages; on code that does need backtracking its overhead is comparable to that of the independent AND-parallel backtracking algorithms.

  19. HYBRID CRYPTOGRAPHY STREAM CIPHER AND RSA ALGORITHM WITH DIGITAL SIGNATURE AS A KEY

    Directory of Open Access Journals (Sweden)

    Grace Lamudur Arta Sihombing

    2017-03-01

    Full Text Available Confidentiality of data is very important in communication. Many cyber crimes exploit security holes for entry and manipulation. To ensure the security and confidentiality of the data, a technique for encrypting data or information, called cryptography, is required; it is one of the components that cannot be ignored in building security. This research analyzes hybrid cryptography with a symmetric key using a stream cipher algorithm and an asymmetric key using the RSA (Rivest-Shamir-Adleman) algorithm. The advantages of hybrid cryptography are the speed of processing data with a symmetric algorithm and the easy transfer of the key with an asymmetric algorithm, which can increase the speed of transaction data processing. The stream cipher algorithm uses an image digital signature as its key, which is itself secured by the RSA algorithm, so the keys for encryption and decryption are different. The Blum Blum Shub method is used to generate the values p and q for the RSA algorithm, making it very difficult for a cryptanalyst to break the key. Analysis of the hybrid stream cipher and RSA cryptography with a digital signature as a key indicates that the size of the encrypted file is equal to the size of the plaintext, neither larger nor smaller, so that the time required for the encryption and decryption process is relatively fast.

  20. A real time sorting algorithm to time sort any deterministic time disordered data stream

    Science.gov (United States)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

    In new-generation high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collecting a large amount of noise and unwanted data, and these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done with time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort for real-time sorting before analysis. The present work reports a new high-speed, scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, similar to what would be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high-rate data streams. The algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
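
    The paper describes an FPGA architecture; purely as a software illustration of the underlying idea, the sketch below reorders a time-stamped stream whose disorder is bounded by a known maximum path delay, holding items in a min-heap and releasing them once the delay window has safely passed. The class name and window parameter are assumptions.

```python
import heapq

class TimeSorter:
    """Reorder a time-stamped stream whose disorder is bounded by `max_delay`.

    Items are held in a min-heap keyed on their time-stamp and released once
    they are older than the newest time-stamp seen minus `max_delay`.  This is
    only a software sketch; the paper implements a parallel FPGA architecture
    with memory management and zero suppression.
    """
    def __init__(self, max_delay):
        self.max_delay = max_delay
        self.heap = []
        self.latest = None

    def push(self, timestamp, payload):
        heapq.heappush(self.heap, (timestamp, payload))
        self.latest = timestamp if self.latest is None else max(self.latest, timestamp)
        released = []
        while self.heap and self.heap[0][0] <= self.latest - self.max_delay:
            released.append(heapq.heappop(self.heap))
        return released

    def flush(self):
        released = []
        while self.heap:
            released.append(heapq.heappop(self.heap))
        return released

sorter = TimeSorter(max_delay=3)
out = []
for ts, hit in [(5, "a"), (3, "b"), (6, "c"), (4, "d"), (9, "e"), (8, "f")]:
    out.extend(sorter.push(ts, hit))
out.extend(sorter.flush())
print(out)  # time-ordered: [(3,'b'), (4,'d'), (5,'a'), (6,'c'), (8,'f'), (9,'e')]
```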

  1. Transmission Algorithm with QoS Considerations for a Sustainable MPEG Streaming Service

    Directory of Open Access Journals (Sweden)

    Sang-Hyong Kim

    2017-03-01

    Full Text Available With the proliferation of heterogeneous networks, there is a need to provide multimedia stream services in a sustainable manner. It is especially critical to maintain Quality of Service (QoS) standards. Existing multimedia streaming services have been studied to guarantee QoS on the receiving side, but QoS has not been ensured because the loss of the streaming data to be transmitted under varying network conditions has not been considered. With an algorithm that considers QoS and can reduce network overhead, it is possible to reduce transmission errors and the waste of communication network resources. In this paper, we propose a scheme that improves the reliability of multimedia transmissions by using an adaptive algorithm that switches between UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) based on the size of the data. In addition, we present a method that retransmits essential portions of the multimedia data, thus improving transmission efficiency. We simulate an MPEG (Moving Picture Experts Group) stream service and evaluate the performance of the proposed adaptive MPEG stream service.

  2. Using internal evaluation measures to validate the quality of diverse stream clustering algorithms

    NARCIS (Netherlands)

    Hassani, M.; Seidl, T.

    2017-01-01

    Measuring the quality of a clustering algorithm has shown to be as important as the algorithm itself. It is a crucial part of choosing the clustering algorithm that performs best for an input data. Streaming input data have many features that make them much more challenging than static ones. They

  3. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Full Text Available Touching corn kernels are usually oversegmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images which all contain 400 corn kernels were tested. Experimental results showed that the effect of segmentation is satisfactory by the improved algorithm, and the accuracy of segmentation is as high as 99.87%.
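
    A sketch of the general pipeline the abstract describes (distance transform, extended-maxima markers, marker-controlled watershed), assuming recent SciPy and scikit-image versions in which watershed lives in skimage.segmentation and h-maxima in skimage.morphology. The h value and the synthetic test image are illustrative, not the paper's optimized threshold.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def split_touching_objects(binary, h=2.0):
    """Separate touching blobs: distance transform, extended-maxima markers,
    marker-controlled watershed.  The h parameter plays the role of the
    optimized extended-maxima threshold described in the abstract."""
    distance = ndi.distance_transform_edt(binary)
    markers, _ = ndi.label(h_maxima(distance, h))
    labels = watershed(-distance, markers, mask=binary)
    return labels

# Synthetic test: two overlapping discs standing in for touching kernels.
yy, xx = np.mgrid[0:60, 0:90]
binary = (((xx - 30) ** 2 + (yy - 30) ** 2 < 20 ** 2)
          | ((xx - 58) ** 2 + (yy - 30) ** 2 < 20 ** 2))
labels = split_touching_objects(binary, h=2.0)
print(labels.max())   # expected: 2 separated kernels
```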

  4. Parallel field line and stream line tracing algorithms for space physics applications

    Science.gov (United States)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

    Field line and stream line tracing is required in various space physics applications, such as the coupling of the global magnetosphere and inner magnetosphere models, the coupling of the solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field is distributed over a large number of processors. We designed algorithms for the various applications, which scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks. Each block is on a single processor. The algorithm follows the vector field inside the blocks, and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally, all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When the field line leaves the local subdomain, the position and other information is stored in a buffer. Periodically the processors exchange the buffers, and continue integration of the field lines until they reach a boundary. At that point the results are sent back to the originating processor. Efficiency is achieved by a careful phasing of computation and communication. In the third algorithm the results of a steady state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all the grid and vector field data and the stream lines are integrated in parallel. If a stream line enters a block, which has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be

  5. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers the implementation of the Bellman-Ford and Lee algorithms to find the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires a stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in-situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode and the parallel MISD modifications of these algorithms were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and has a pronouncedly applied nature. The article also presents the analysis results of the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the basic trends of a technique for parallelization of algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies there are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
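
    For reference alongside the MISD discussion above, here is a plain sequential Bellman-Ford implementation; in the MISD formulation the traversal of the edge structures would be handled by the structures processor while the CPU performs the arithmetic relaxations. This is only a baseline sketch, not MISD code.

```python
def bellman_ford(edges, n_vertices, source):
    """Sequential Bellman-Ford shortest paths: relax every edge up to n-1 times.

    `edges` is a list of (u, v, weight) triples.  In the MISD formulation
    discussed above, the edge-structure traversal would be offloaded to the
    structures processor while the CPU performs the arithmetic relaxations;
    this reference version keeps both on the CPU.
    """
    INF = float("inf")
    dist = [INF] * n_vertices
    dist[source] = 0
    for _ in range(n_vertices - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:       # early exit once distances stabilise
            break
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(edges, 4, source=0))   # [0, 3, 1, 4]
```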

  6. Research and Application on Fractional-Order Darwinian PSO Based Adaptive Extended Kalman Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    Qiguang Zhu

    2014-05-01

    Full Text Available To resolve the difficulty of establishing an accurate a priori noise model for the extended Kalman filtering algorithm, the fractional-order Darwinian particle swarm optimization (PSO) algorithm is proposed and introduced into the fuzzy adaptive extended Kalman filtering algorithm. A natural selection method is adopted to improve the standard particle swarm optimization algorithm, which enhances the diversity of particles and avoids premature convergence. In addition, fractional calculus is used to improve the evolution speed of the particles. The improved PSO algorithm is applied to train the fuzzy adaptive extended Kalman filter and achieve simultaneous localization and mapping. The simulation results show that, compared with the fuzzy adaptive extended Kalman filter localization and mapping algorithm trained with geese particle swarm optimization, the proposed approach is greatly improved in terms of localization and mapping.

  7. Background Traffic-Based Retransmission Algorithm for Multimedia Streaming Transfer over Concurrent Multipaths

    Directory of Open Access Journals (Sweden)

    Yuanlong Cao

    2012-01-01

    Full Text Available Content-rich multimedia streaming will be among the most attractive services in next-generation networks. With its ability to distribute data across multiple end-to-end paths based on SCTP's multihoming feature, concurrent multipath transfer SCTP (CMT-SCTP) has been regarded as the most promising technology for efficient multimedia streaming transmission. However, current research on CMT-SCTP mainly focuses on algorithms related to data delivery performance and seldom considers background traffic factors. In reality, the background traffic of realistic network environments has an important impact on the performance of CMT-SCTP. In this paper, we first investigate the effect of background traffic on the performance of CMT-SCTP based on a realistic simulation topology with reasonable background traffic in NS2, and then, based on the locality of background flows, a further improved retransmission algorithm, named RTX_CSI, is proposed to achieve higher average throughput and a better quality of experience for users of multimedia streaming services.

  8. An Approximate L p Difference Algorithm for Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Jessica H. Fong

    2001-12-01

    Full Text Available Several recent papers have shown how to approximate the difference ∑_i |a_i − b_i| or ∑_i |a_i − b_i|^2 between two functions, when the function values a_i and b_i are given in a data stream, and their order is chosen by an adversary. These algorithms use little space (much less than would be needed to store the entire stream) and little time to process each item in the stream. They approximate with small relative error. Using different techniques, we show how to approximate the L_p-difference ∑_i |a_i − b_i|^p for any rational-valued p ∈ (0,2], with comparable efficiency and error. We also show how to approximate ∑_i |a_i − b_i|^p for larger values of p but with a worse error guarantee. Our results fill in gaps left by recent work, by providing an algorithm that is precisely tunable for the application at hand. These results can be used to assess the difference between two chronologically or physically separated massive data sets, making one quick pass over each data set, without buffering the data or requiring the data source to pause. For example, one can use our techniques to judge whether the traffic on two remote network routers is similar without requiring either router to transmit a copy of its traffic. A web search engine could use such algorithms to construct a library of small "sketches," one for each distinct page on the web; one can approximate the extent to which new web pages duplicate old ones by comparing the sketches of the web pages. Such techniques will become increasingly important as the enormous scale, distributional nature, and one-pass processing requirements of data sets become more commonplace.
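
    The paper covers general L_p with p in (0,2]; the sketch below shows only the familiar p = 2 special case, estimated with AMS-style random +/-1 projections, to illustrate how two small, independently built summaries can approximate ∑_i |a_i − b_i|^2 without exchanging the raw streams. Sketch size and hashing scheme are illustrative assumptions.

```python
import hashlib
import statistics

class L2DiffSketch:
    """AMS-style sketch: each of `reps` counters accumulates a random +/-1
    projection of the frequency vector; squaring and averaging the counter
    differences estimates sum_i |a_i - b_i|^2.  This illustrates only the
    p = 2 case of the L_p-difference problem discussed above."""
    def __init__(self, reps=64, seed=0):
        self.reps = reps
        self.seed = seed
        self.counters = [0.0] * reps

    def _sign(self, r, item):
        # Deterministic +/-1 hash shared by both sketches via the seed.
        digest = hashlib.sha256(f"{self.seed}|{r}|{item}".encode()).digest()
        return 1 if digest[0] & 1 else -1

    def update(self, item, value=1.0):
        for r in range(self.reps):
            self.counters[r] += self._sign(r, item) * value

    def l2_difference(self, other):
        return statistics.mean(
            (x - y) ** 2 for x, y in zip(self.counters, other.counters))

# Two "remote" streams, sketched independently with the same seed.
a, b = L2DiffSketch(seed=42), L2DiffSketch(seed=42)
for item, val in [("x", 3), ("y", 1), ("z", 2)]:
    a.update(item, val)
for item, val in [("x", 1), ("y", 1), ("w", 2)]:
    b.update(item, val)
print(a.l2_difference(b))   # approximates |3-1|^2 + |2-0|^2 + |0-2|^2 = 12
```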

  9. A note on extending decision algorithms by stable predicates

    Directory of Open Access Journals (Sweden)

    Alfredo Ferro

    1988-11-01

    Full Text Available A general mechanism to extend decision algorithms to deal with additional predicates is described. The only condition imposed on the predicates is stability with respect to some transitive relations.

  10. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2012-07-01

    Full Text Available A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images (abstract fragment): ... the spectral bands separately and introduced a meta-optimization method for the EKF that will be called the Bias Variance Equilibrium Point (BVEP) in this paper. The objective of this paper is to introduce an unsupervised search algorithm called the Bias...

  11. Extending the benchmark simulation model no2 with processes for nitrous oxide production and side-stream nitrogen removal

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Sin, Gürkan; Gernaey, Krist V.

    2015-01-01

    In this work the Benchmark Simulation Model No.2 is extended with processes for nitrous oxide production and for side-stream partial nitritation/Anammox (PN/A) treatment. For these extensions the Activated Sludge Model for Greenhouse gases No.1 was used to describe the main waterline, whereas the Complete Autotrophic Nitrogen Removal (CANR) model was used to describe the side-stream (PN/A) treatment. Comprehensive simulations were performed to assess the extended model. Steady-state simulation results revealed the following: (i) the implementation of a continuous CANR side-stream reactor increased the total nitrogen removal by 10%; (ii) it reduced the aeration demand by 16% compared to the base case; and (iii) the activity of ammonia-oxidizing bacteria has the greatest influence on nitrous oxide emissions. The extended model provides a simulation platform to generate, test and compare novel control...

  12. Extending Counter-streaming Motion from an Active Region Filament to a Sunspot Light Bridge

    Science.gov (United States)

    Wang, Haimin; Liu, Rui; Li, Qin; Liu, Chang; Deng, Na; Xu, Yan; Jing, Ju; Wang, Yuming; Cao, Wenda

    2018-01-01

    We analyze high-resolution observations from the 1.6 m telescope at Big Bear Solar Observatory that cover an active region filament. Counter-streaming motions are clearly observed in the filament. The northern end of the counter-streaming motions extends to a light bridge, forming a spectacular circulation pattern around a sunspot, with clockwise motion in the blue wing and counterclockwise motion in the red wing, as observed in the Hα off-bands. The apparent speed of the flow is around 10–60 km s‑1 in the filament, decreasing to 5–20 km s‑1 in the light bridge. The most intriguing results are the magnetic structure and the counter-streaming motions in the light bridge. Similar to those in the filament, the magnetic fields show a dominant transverse component in the light bridge. However, the filament is located between opposed magnetic polarities, while the light bridge is between strong fields of the same polarity. We analyze the power of oscillations with the image sequences of constructed Dopplergrams, and find that the filament’s counter-streaming motion is due to physical mass motion along fibrils, while the light bridge’s counter-streaming motion is due to oscillation in the direction along the line-of-sight. The oscillation power peaks around 4 minutes. However, the section of the light bridge next to the filament also contains a component of the extension of the filament in combination with the oscillation, indicating that some strands of the filament are extended to and rooted in that part of the light bridge.

  13. Extended SVM algorithms for multilevel trans-Z-source inverter

    Directory of Open Access Journals (Sweden)

    Aida Baghbany Oskouei

    2016-03-01

    Full Text Available This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works with high switching frequency and does not generate the mean value of the desired load voltage in every switching interval. In this topology the output voltage is not limited to the dc voltage source, as in the traditional cascaded multilevel inverter, and can be increased with trans-Z-network shoot-through state control. Besides, it is more reliable against short circuits, and due to the several dc sources in each phase of this topology, it is possible to use it with hybrid renewable energy. The proposed SVM algorithms include the following: a combined modulation algorithm (SVPWM) and a shoot-through implementation in dwell times of voltage vectors algorithm. These algorithms are compared from the viewpoint of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected representations.

  14. A New Algorithm for Cartographic Simplification of Streams and Lakes Using Deviation Angles and Error Bands

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-10-01

    Full Text Available Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer's radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).

  15. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    Science.gov (United States)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  16. STEGANOGRAPHY FOR TWO AND THREE LSBs USING EXTENDED SUBSTITUTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    R.S. Gutte

    2013-03-01

    Full Text Available The security of data on the internet has become a priority. Even though a message is encrypted using a strong cryptography algorithm, it cannot avoid the suspicion of an intruder. This paper proposes an approach in which data is encrypted using the Extended Substitution Algorithm and the resulting cipher text is then concealed at two or three LSB positions of the carrier image. The algorithm covers almost all types of symbols and alphabets. The encrypted text is concealed variably into the LSBs, making it a stronger approach. The visible characteristics of the carrier image before and after concealment remain almost the same. The algorithm has been implemented using Matlab.
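
    A minimal sketch of the concealment step only, assuming a flat list of 8-bit pixel values as a stand-in for the carrier image and assuming the Extended Substitution Algorithm has already produced the cipher text; two LSBs per pixel are used here, matching one of the configurations the abstract mentions.

```python
def embed_lsb(pixels, payload: bytes, bits_per_pixel: int = 2):
    """Hide `payload` in the lowest `bits_per_pixel` bits of 8-bit pixel values.

    `pixels` is a flat list of ints in [0, 255] (a stand-in for image data);
    the payload is assumed to be the already-encrypted cipher text.  For
    simplicity this sketch assumes the payload bit count divides evenly by
    bits_per_pixel (true for the 2-bit case shown).
    """
    bitstream = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bitstream) > len(pixels) * bits_per_pixel:
        raise ValueError("carrier too small for payload")
    stego = list(pixels)
    for i in range(0, len(bitstream), bits_per_pixel):
        chunk = bitstream[i:i + bits_per_pixel]
        value = 0
        for bit in chunk:
            value = (value << 1) | bit
        idx = i // bits_per_pixel
        mask = ~((1 << len(chunk)) - 1) & 0xFF
        stego[idx] = (pixels[idx] & mask) | value
    return stego

def extract_lsb(stego, n_bytes: int, bits_per_pixel: int = 2) -> bytes:
    bits = []
    for value in stego:
        for i in range(bits_per_pixel - 1, -1, -1):
            bits.append((value >> i) & 1)
        if len(bits) >= n_bytes * 8:
            break
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, n_bytes * 8, 8))

carrier = list(range(64))                        # toy 8x8 "image"
stego = embed_lsb(carrier, b"hi", bits_per_pixel=2)
print(extract_lsb(stego, 2, bits_per_pixel=2))   # b'hi'
```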

  17. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  18. Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

    OpenAIRE

    Elenberg, Ethan R.; Dimakis, Alexandros G.; Feldman, Moran; Karbasi, Amin

    2017-01-01

    In many machine learning applications, it is important to explain the predictions of a black-box classifier. For example, why does a deep neural network assign an image to a particular class? We cast interpretability of black-box classifiers as a combinatorial maximization problem and propose an efficient streaming algorithm to solve it subject to cardinality constraints. By extending ideas from Badanidiyuru et al. [2014], we provide a constant factor approximation guarantee for our algorithm...
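
    The paper's guarantees concern weak submodularity and interpretability objectives; as a generic illustration of the streaming approach it builds on (Badanidiyuru et al.-style thresholding), the sketch below keeps a streamed element only when its marginal gain clears a fixed threshold, shown on a toy coverage function. The single-threshold rule and the example objective are simplifying assumptions, not the paper's algorithm.

```python
def stream_greedy_threshold(stream, k, objective, threshold):
    """Single-pass thresholding rule for cardinality-constrained maximisation:
    keep an element only if its marginal gain is at least `threshold`.

    A full SIEVE-style algorithm runs many such thresholds in parallel and
    keeps the best resulting set; this sketch shows one threshold only."""
    selected = []
    current = objective(selected)
    for element in stream:
        if len(selected) == k:
            break
        gain = objective(selected + [element]) - current
        if gain >= threshold:
            selected.append(element)
            current += gain
    return selected, current

# Toy monotone submodular objective: coverage of a universe by sets.
sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 7},
    "e": {8},
}
coverage = lambda chosen: len(set().union(*(sets[s] for s in chosen)) if chosen else set())

picked, value = stream_greedy_threshold(iter(sets), k=2, objective=coverage, threshold=3)
print(picked, value)   # ['a', 'c'] covering 7 elements
```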

  19. Face recognition algorithm using extended vector quantization histogram features.

    Science.gov (United States)

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

  20. Gesture Recognition from Data Streams of Human Motion Sensor Using Accelerated PSO Swarm Search Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available Human motion sensing technology gains tremendous popularity nowadays with practical applications such as video surveillance for security, hand signing, and smart-home and gaming. These applications capture human motions in real-time from video sensors, the data patterns are nonstationary and ever changing. While the hardware technology of such motion sensing devices as well as their data collection process become relatively mature, the computational challenge lies in the real-time analysis of these live feeds. In this paper we argue that traditional data mining methods run short of accurately analyzing the human activity patterns from the sensor data stream. The shortcoming is due to the algorithmic design which is not adaptive to the dynamic changes in the dynamic gesture motions. The successor of these algorithms which is known as data stream mining is evaluated versus traditional data mining, through a case of gesture recognition over motion data by using Microsoft Kinect sensors. Three different subjects were asked to read three comic strips and to tell the stories in front of the sensor. The data stream contains coordinates of articulation points and various positions of the parts of the human body corresponding to the actions that the user performs. In particular, a novel technique of feature selection using swarm search and accelerated PSO is proposed for enabling fast preprocessing for inducing an improved classification model in real-time. Superior result is shown in the experiment that runs on this empirical data stream. The contribution of this paper is on a comparative study between using traditional and data stream mining algorithms and incorporation of the novel improved feature selection technique with a scenario where different gesture patterns are to be recognized from streaming sensor data.

  1. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step by step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  2. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    Science.gov (United States)

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms, taking into account, common benchmark datasets. The experiments show promising results from the proposed algorithm (regarding accuracy and runtime), handling different types of concept drifts. PMID:25879051

  3. StreamQRE: Modular Specification and Efficient Evaluation of Quantitative Queries over Streaming Data.

    Science.gov (United States)

    Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev

    2017-06-01

    Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.

  4. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  5. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...

  6. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  7. RStorm : Developing and testing streaming algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  8. A Hybrid Ant Colony Optimization Algorithm for the Extended Capacitated Arc Routing Problem.

    Science.gov (United States)

    Li-Ning Xing; Rohlfshagen, P; Ying-Wu Chen; Xin Yao

    2011-08-01

    The capacitated arc routing problem (CARP) is representative of numerous practical applications, and in order to widen its scope, we consider an extended version of this problem that entails both total service time and fixed investment costs. We subsequently propose a hybrid ant colony optimization (ACO) algorithm (HACOA) to solve instances of the extended CARP. This approach is characterized by the exploitation of heuristic information, adaptive parameters, and local optimization techniques: Two kinds of heuristic information, arc cluster information and arc priority information, are obtained continuously from the solutions sampled to guide the subsequent optimization process. The adaptive parameters ease the burden of choosing initial values and facilitate improved and more robust results. Finally, local optimization, based on the two-opt heuristic, is employed to improve the overall performance of the proposed algorithm. The resulting HACOA is tested on four sets of benchmark problems containing a total of 87 instances with up to 140 nodes and 380 arcs. In order to evaluate the effectiveness of the proposed method, some existing capacitated arc routing heuristics are extended to cope with the extended version of this problem; the experimental results indicate that the proposed ACO method outperforms these heuristics.

  9. Pattern Discovery and Change Detection of Online Music Query Streams

    Science.gov (United States)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of our proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of bit-sequence representation. It takes advantage of the bitwise "left" and "and" operations of the representation. Experiments show that the proposed algorithm scans the music query stream only once, and runs significantly faster and consumes less memory than existing algorithms, such as SWFI-stream and Moment.
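
    As a rough illustration of the bit-sequence idea (each item keeps a bitmap of the sliding-window slots in which it occurs, and supports are obtained with shifts and bitwise AND), the following Python sketch uses an invented window size and item set; it does not reproduce FTP-stream's temporal-pattern mining itself.

```python
# A hedged sketch of the general bit-sequence representation: each item keeps
# a bitmap over the sliding-window positions in which it occurs, support is a
# popcount, and co-occurrence of a pattern is a bitwise AND.

WINDOW = 8  # window size in transactions (illustrative)

class BitSequenceWindow:
    def __init__(self):
        self.bits = {}  # item -> int used as a WINDOW-wide bitmap

    def add_transaction(self, items):
        mask = (1 << WINDOW) - 1
        # Slide the window: shift every bitmap left and drop the oldest slot.
        for key in self.bits:
            self.bits[key] = (self.bits[key] << 1) & mask
        # Mark the newest slot (bit 0) for items in this transaction.
        for item in items:
            self.bits[item] = self.bits.get(item, 0) | 1

    def support(self, pattern):
        # AND the bitmaps of all items in the pattern, then count set bits.
        acc = (1 << WINDOW) - 1
        for item in pattern:
            acc &= self.bits.get(item, 0)
        return bin(acc).count("1")

if __name__ == "__main__":
    w = BitSequenceWindow()
    for t in [{"C", "E"}, {"C"}, {"E", "G"}, {"C", "E"}]:
        w.add_transaction(t)
    print(w.support({"C", "E"}))  # -> 2 (transactions 1 and 4 in the window)
```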

  10. Pilot-Streaming: Design Considerations for a Stream Processing Framework for High-Performance Computing

    OpenAIRE

    Andre Luckow; Peter Kasson; Shantenu Jha

    2016-01-01

    This White Paper (submitted to STREAM 2016) identifies an approach to integrate streaming data with HPC resources. The paper outlines the design of Pilot-Streaming, which extends the concept of Pilot-abstraction to streaming real-time data.

  11. Stream Clustering of Growing Objects

    Science.gov (United States)

    Siddiqui, Zaigham Faraz; Spiliopoulou, Myra

    We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream, e.g. streams of Customer and Transaction. As the Transaction stream accumulates, the Customers' profiles grow. First, we use incremental propositionalisation to convert the multi-table stream into a single-table stream upon which we apply clustering. For this purpose, we develop an online version of the K-Means algorithm that can handle these swelling objects and any new objects that arrive. The algorithm also monitors the quality of the model and performs re-clustering when it deteriorates. We evaluate our method on the PKDD Challenge 1999 dataset.
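
    A minimal sketch of the online K-Means update such a method builds on is shown below; the multi-table propositionalisation, the model-quality monitoring and the re-clustering trigger from the paper are omitted, and all parameter choices are illustrative.

```python
# A minimal online K-Means: assign each arriving object to its nearest
# centroid and nudge that centroid toward it with a decaying learning rate.

import math
import random

def nearest(centroids, x):
    return min(range(len(centroids)), key=lambda i: math.dist(centroids[i], x))

def online_kmeans(stream, k):
    centroids, counts = [], []
    for x in stream:
        if len(centroids) < k:              # bootstrap with the first k objects
            centroids.append(list(x))
            counts.append(1)
            continue
        i = nearest(centroids, x)
        counts[i] += 1
        lr = 1.0 / counts[i]                # per-cluster decaying learning rate
        centroids[i] = [c + lr * (xi - c) for c, xi in zip(centroids[i], x)]
    return centroids

if __name__ == "__main__":
    rng = random.Random(1)
    stream = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)] + \
             [(rng.gauss(5, 1), rng.gauss(5, 1)) for _ in range(500)]
    print(online_kmeans(stream, k=2))
```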

  12. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.

  13. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the...

  14. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable

  15. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas

  16. A Novel Image Stream Cipher Based On Dynamic Substitution

    OpenAIRE

    Elsharkawi, A.; El-Sagheer, R. M.; Akah, H.; Taha, H.

    2016-01-01

    Recently, many chaos-based stream cipher algorithms have been developed. A traditional chaos stream cipher is based on XORing a secure random number sequence, generated from chaotic maps (e.g. the logistic map, Bernoulli map, or tent map), with the original image to obtain the encrypted image. This type of stream cipher appears to be vulnerable to chosen-plaintext attacks. This paper introduces a new stream cipher algorithm based on a dynamic substitution box. The new algorithm uses one substitution b...
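
    For reference, the traditional chaos-based stream cipher criticized in the abstract can be sketched as follows (a logistic-map keystream XORed with the image bytes); the proposed dynamic-substitution cipher itself is not reproduced, and the map parameters below are arbitrary examples.

```python
# A sketch of the *traditional* chaos-based XOR stream cipher the abstract
# describes (and criticizes). The dynamic-substitution scheme is not shown.

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)    # quantize the chaotic state to a byte
    return bytes(out)

def xor_cipher(data: bytes, x0=0.3141592, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

if __name__ == "__main__":
    image = bytes(range(256)) * 4          # stand-in for raw pixel data
    enc = xor_cipher(image)
    assert xor_cipher(enc) == image        # XOR with the same keystream decrypts
```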

  17. Low-cost attitude determination system using an extended Kalman filter (EKF) algorithm

    Science.gov (United States)

    Esteves, Fernando M.; Nehmetallah, Georges; Abot, Jandro L.

    2016-05-01

    Attitude determination is one of the most important subsystems in spacecraft, satellite, or scientific balloon missions, since it can be combined with actuators to provide rate stabilization and pointing accuracy for payloads. In this paper, a low-cost attitude determination system with a precision on the order of arc-seconds is presented that uses low-cost commercial sensors, including a set of uncorrelated MEMS gyroscopes, two clinometers, and a magnetometer arranged in a hierarchical manner. The faster but less precise sensors are updated by the slower but more precise ones through an Extended Kalman Filter (EKF)-based data fusion algorithm. A review of the EKF algorithm fundamentals and its implementation in the current application is presented, along with an analysis of sensor noise. Finally, the results from the data fusion algorithm implementation are discussed in detail.
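
    A generic EKF predict/update cycle of the kind used in such fusion schemes is sketched below; the state model, Jacobians and noise covariances are placeholders rather than the paper's gyroscope/clinometer/magnetometer model.

```python
# A generic extended Kalman filter predict/update skeleton. The toy model at
# the bottom (1-D angle + rate, angle-only measurement) is illustrative only.

import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF cycle: propagate with motion model f, correct with measurement z."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    f = lambda x: F @ x
    h = lambda x: H @ x
    Q = 1e-5 * np.eye(2)
    R = np.array([[1e-3]])
    x, P = np.zeros(2), np.eye(2)
    for z in [0.01, 0.02, 0.028, 0.041]:   # simulated angle measurements
        x, P = ekf_step(x, P, np.array([z]), f, F, h, H, Q, R)
    print(x)
```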

  18. Interactive collision detection for deformable models using streaming AABBs.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2007-01-01

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to obtain only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapping AABBs, we perform primitive-level (e.g., triangle) intersection checking on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and the timings obtained were 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-fold performance improvement over the earlier approach. We also made comparisons with a SW-based AABB...

  19. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    Science.gov (United States)

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a step called star mapping. Software simulations and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method.

  20. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.

    Science.gov (United States)

    Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori

    2014-03-01

    The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by the resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines and faces an "exploration-exploitation dilemma." The dilemma is a trade-off between the speed and accuracy of the decision making, which are conflicting objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to a stretched variant of BP, the Extended Bandit Problem (EBP), which is a problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performance for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability that is essential for the survival of the amoeba in a hostile environment. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
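
    For orientation, the ϵ-Greedy baseline mentioned in the comparison can be sketched for the standard (single-machine-per-round) bandit problem as follows; the TOW model and its M-tuple extension are not reproduced here.

```python
# The classic epsilon-Greedy policy for a Bernoulli multi-armed bandit.
# Reward probabilities and parameters below are illustrative.

import random

def epsilon_greedy(reward_probs, rounds=10_000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n
    means = [0.0] * n
    total = 0
    for _ in range(rounds):
        if rng.random() < eps:                       # explore
            arm = rng.randrange(n)
        else:                                        # exploit current best
            arm = max(range(n), key=lambda i: means[i])
        reward = 1 if rng.random() < reward_probs[arm] else 0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running average
        total += reward
    return total, means

if __name__ == "__main__":
    print(epsilon_greedy([0.2, 0.5, 0.7]))
```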

  1. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    Science.gov (United States)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

    Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for directions of arrival (DoAs) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation using only a few linear operations. The DoAs estimation algorithms based on this new approach exhibit good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoAs estimation can be achieved, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoAs estimators is analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The conducted simulations confirm the high resolution of the developed algorithms and the efficiency of the proposed approach.

  2. An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment

    Directory of Open Access Journals (Sweden)

    Shaymaa Elsherbiny

    2018-03-01

    Full Text Available Cloud computing is emerging as a high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. Many resource management methods may enhance the efficiency of the whole cloud computing system. The key part of cloud computing resource management is resource scheduling. Optimized scheduling of tasks on the cloud virtual machines is an NP-hard problem, and many algorithms have been presented to solve it. The variations among these schedulers are due to the fact that their scheduling strategies are adapted to the changing environment and the types of tasks. The focus of this paper is on workflow scheduling in cloud computing, which is gaining a lot of attention recently because workflows have emerged as a paradigm to represent complex computing problems. We propose a novel algorithm extending the nature-inspired Intelligent Water Drops (IWD) algorithm that optimizes the scheduling of workflows on the cloud. The proposed algorithm is implemented and embedded within a workflow simulation toolkit and tested in different simulated cloud environments with different cost models. Our algorithm showed noticeable enhancements over the classical workflow scheduling algorithms. We compared the proposed IWD-based algorithm with other well-known scheduling algorithms, including MIN-MIN, MAX-MIN, Round Robin, FCFS, MCT, PSO and C-PSO, and the proposed algorithm presented noticeable enhancements in performance and cost in most situations.

  3. An extended five-stream model for diffusion of ion-implanted dopants in monocrystalline silicon

    International Nuclear Information System (INIS)

    Khina, B.B.

    2007-01-01

    Low-energy high-dose ion implantation of different dopants (P, Sb, As, B and others) into monocrystalline silicon with subsequent thermal annealing is used for the formation of ultra-shallow p-n junctions in modern VLSI circuit technology. During annealing, dopant activation and diffusion in silicon takes place. The experimentally observed phenomenon of transient enhanced diffusion (TED), which is typically ascribed to the interaction of diffusing species with non-equilibrium point defects accumulated in silicon due to ion damage, and formation of small clusters and extended defects, hinders further down scaling of p-n junctions in VLSI circuits. TED is currently a subject of extensive experimental and theoretical investigation in many binary and multicomponent systems. However, the state-of-the-art mathematical models of dopant diffusion, which are based on the so-called 'five-stream' approach, and modern TCAD software packages such as SUPREM-4 (by Silvaco Data Systems, Ltd.) that implement these models encounter severe difficulties in describing TED. Solving the intricate problem of TED suppression and development of novel regimes of ion implantation and rapid thermal annealing is impossible without elaboration of new mathematical models and computer simulation of this complex phenomenon. In this work, an extended five-stream model for diffusion in silicon is developed which takes into account all possible charge states of point defects (vacancies and silicon self-interstitials) and diffusing pairs 'dopant atom-vacancy' and 'dopant atom-silicon self-interstitial'. The model includes the drift terms for differently charged point defects and pairs in the internal electric field and the kinetics of interaction between unlike 'species' (generation and annihilation of pairs and annihilation of point defects). Expressions for diffusion coefficients and numerous sink/source terms that appear in the non-linear, non-steady-state reaction-diffusion equations are derived

  4. Use of NTRIP for Optimizing the Decoding Algorithm for Real-Time Data Streams

    Directory of Open Access Journals (Sweden)

    Zhanke He

    2014-10-01

    Full Text Available As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) Augmentation systems, such as the Continuous Operational Reference System (CORS), the Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou Navigation Satellite system and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China.

  5. Use of NTRIP for optimizing the decoding algorithm for real-time data streams.

    Science.gov (United States)

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-10-10

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) Augmentation systems, such as Continuous Operational Reference System (CORS), Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou Navigation Satellite system and the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China.

  6. Extended great deluge algorithm for the imperfect preventive maintenance optimization of multi-state systems

    International Nuclear Information System (INIS)

    Nahas, Nabil; Khatab, Abdelhakim; Ait-Kadi, Daoud; Nourelfath, Mustapha

    2008-01-01

    This paper deals with the preventive maintenance optimization problem for multi-state systems (MSS). This problem was initially addressed and solved by Levitin and Lisnianski [Optimization of imperfect preventive maintenance for multi-state systems. Reliab Eng Syst Saf 2000;67:193-203]. It consists of finding an optimal sequence of maintenance actions which minimizes maintenance cost while providing the desired system reliability level. This paper proposes an approach which improves the results obtained by the genetic algorithm (GENITOR) in Levitin and Lisnianski [Optimization of imperfect preventive maintenance for multi-state systems. Reliab Eng Syst Saf 2000;67:193-203]. The considered MSS have a range of performance levels, and their reliability is defined as the ability to meet a given demand. This reliability is evaluated using the universal generating function technique. An optimization method based on the extended great deluge algorithm is proposed. This method has the advantage over other methods of being simple and requiring less implementation effort. The developed algorithm is compared to that of Levitin and Lisnianski [Optimization of imperfect preventive maintenance for multi-state systems. Reliab Eng Syst Saf 2000;67:193-203] using a reference example and two newly generated examples. This comparison shows that the extended great deluge gives the best solutions (i.e. those with minimal costs) for 8 out of 10 instances.
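
    A generic great deluge skeleton for minimization is sketched below; the MSS-specific solution encoding, cost model and reliability constraint are replaced by toy placeholders, so this only illustrates the acceptance rule and the falling "water level".

```python
# A generic great deluge search: accept any neighbour whose cost does not
# exceed the current water level, and let the level drop steadily.

import random

def great_deluge(initial, cost, neighbor, rain_speed=0.999, iters=50_000, seed=0):
    rng = random.Random(seed)
    current, best = initial, initial
    level = cost(initial)            # the "water level" starts at the initial cost
    for _ in range(iters):
        cand = neighbor(current, rng)
        if cost(cand) <= level:      # accept anything not worse than the level
            current = cand
            if cost(cand) < cost(best):
                best = cand
        level *= rain_speed          # the level drops steadily ("rain")
    return best

if __name__ == "__main__":
    # Toy example: minimize a bumpy 1-D function over integers in [0, 100].
    cost = lambda x: (x - 37) ** 2 + 10 * (x % 5)
    neighbor = lambda x, rng: min(100, max(0, x + rng.choice([-3, -1, 1, 3])))
    print(great_deluge(50, cost, neighbor))
```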

  7. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    ...disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  8. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Waqas Rehan

    2016-09-01

    ... In the end, simulations are made using MATLAB, and the results show that the Extended version of the NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.

  9. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks.

    Science.gov (United States)

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-09-12

    ... Simulations are made using MATLAB, and the results show that the Extended version of the NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.

  10. AMIDST: Analysis of MassIve Data STreams

    DEFF Research Database (Denmark)

    Masegosa, Andres; Martinez, Ana Maria; Borchani, Hanen

    2015-01-01

    The Analysis of MassIve Data STreams (AMIDST) Java toolbox provides a collection of scalable and parallel algorithms for inference and learning of hybrid Bayesian networks from data streams. The toolbox, available at http://amidst.github.io/toolbox/ under the Apache Software License version 2.0, also efficiently leverages existing functionalities and algorithms by interfacing to software tools such as HUGIN and MOA.

  11. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    Science.gov (United States)

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.

  12. Data Stream Clustering With Affinity Propagation

    KAUST Repository

    Zhang, Xiangliang

    2014-07-09

    Data stream clustering provides insights into the underlying patterns of data flows. This paper focuses on selecting the best representatives from clusters of streaming data. There are two main challenges: how to cluster with the best representatives and how to handle the evolving patterns that are important characteristics of streaming data with dynamic distributions. We employ the Affinity Propagation (AP) algorithm presented in 2007 by Frey and Dueck for the first challenge, as it offers good guarantees of clustering optimality for selecting exemplars. The second challenging problem is solved by change detection. The presented StrAP algorithm combines AP with a statistical change point detection test; the clustering model is rebuilt whenever the test detects a change in the underlying data distribution. Besides the validation on two benchmark data sets, the presented algorithm is validated on a real-world application, monitoring the data flow of jobs submitted to the EGEE grid.

  13. Data Stream Clustering With Affinity Propagation

    KAUST Repository

    Zhang, Xiangliang; Furtlehner, Cyril; Germain-Renaud, Cecile; Sebag, Michele

    2014-01-01

    Data stream clustering provides insights into the underlying patterns of data flows. This paper focuses on selecting the best representatives from clusters of streaming data. There are two main challenges: how to cluster with the best representatives and how to handle the evolving patterns that are important characteristics of streaming data with dynamic distributions. We employ the Affinity Propagation (AP) algorithm presented in 2007 by Frey and Dueck for the first challenge, as it offers good guarantees of clustering optimality for selecting exemplars. The second challenging problem is solved by change detection. The presented StrAP algorithm combines AP with a statistical change point detection test; the clustering model is rebuilt whenever the test detects a change in the underlying data distribution. Besides the validation on two benchmark data sets, the presented algorithm is validated on a real-world application, monitoring the data flow of jobs submitted to the EGEE grid.
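
    A simplified, windowed variant of the same idea (not the published StrAP algorithm) can be sketched with scikit-learn's Affinity Propagation: cluster a batch to obtain exemplars, monitor the distance of new points to their nearest exemplar, and rebuild when that distance drifts. The drift test below is a crude threshold rather than the paper's statistical change-point test, and all parameters are illustrative.

```python
# Windowed clustering with Affinity Propagation plus a crude drift trigger.

import numpy as np
from sklearn.cluster import AffinityPropagation

def fit_exemplars(batch):
    ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(batch)
    return batch[ap.cluster_centers_indices_]

def stream_cluster(stream, batch_size=200, drift_factor=3.0):
    buffer, exemplars, baseline = [], None, None
    for x in stream:
        if exemplars is None:
            buffer.append(x)
            if len(buffer) == batch_size:          # (re)build the model
                batch = np.array(buffer)
                exemplars = fit_exemplars(batch)
                baseline = np.mean(
                    [np.linalg.norm(exemplars - p, axis=1).min() for p in batch])
                buffer = []
            continue
        dist = np.linalg.norm(exemplars - x, axis=1).min()
        if dist > drift_factor * baseline:         # crude change detection
            exemplars, buffer = None, [x]          # trigger a rebuild
    return exemplars

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(8, 1, (300, 2))])
    print(stream_cluster(stream))
```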

  14. A Distributed Flocking Approach for Information Stream Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to obtain accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  15. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
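
    The underlying idea of reducing the video to a one-dimensional frame-difference curve and flagging its peaks can be sketched as follows; note that the paper operates on macroblock statistics in the MPEG compressed domain, whereas this illustration works on plain decoded pixel arrays, and the threshold rule is an invented stand-in.

```python
# Turn a video (time x height x width) into a 1-D difference curve and flag
# peaks as candidate scene changes.

import numpy as np

def frame_difference_curve(frames):
    """Mean absolute difference between consecutive frames."""
    return np.array([np.mean(np.abs(frames[i].astype(float) -
                                    frames[i - 1].astype(float)))
                     for i in range(1, len(frames))])

def detect_cuts(curve, k=4.0):
    """Flag frames whose difference exceeds mean + k * std of the curve."""
    thresh = curve.mean() + k * curve.std()
    return [i + 1 for i, d in enumerate(curve) if d > thresh]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shot_a = rng.integers(0, 40, (30, 48, 64), dtype=np.uint8)
    shot_b = rng.integers(200, 255, (30, 48, 64), dtype=np.uint8)
    frames = np.concatenate([shot_a, shot_b])
    print(detect_cuts(frame_difference_curve(frames)))   # cut near frame 30
```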

  16. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)

  17. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
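
    As background for the second result, the classic Efraimidis-Spirakis weighted reservoir sampling scheme for fixed per-item weights is sketched below; the report's contribution of handling dynamically changing weights is not reproduced by this simple scheme.

```python
# Weighted reservoir sampling ("A-Res"): keep the k items with the largest
# keys u**(1/w), where u is uniform random and w is the item's weight.

import heapq
import random

def weighted_reservoir_sample(stream, k, seed=0):
    rng = random.Random(seed)
    heap = []  # min-heap of (key, value)
    for value, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, value))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, value))
    return [value for _, value in heap]

if __name__ == "__main__":
    stream = [("a", 1.0), ("b", 1.0), ("c", 10.0), ("d", 0.1)]
    print(weighted_reservoir_sample(stream, k=2))
```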

  18. Fine-Grained Rate Shaping for Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Chen Tsuhan

    2004-01-01

    Full Text Available Video streaming over wireless networks faces challenges of time-varying packet loss rate and fluctuating bandwidth. In this paper, we focus on streaming precoded video that is both source and channel coded. Dynamic rate shaping has been proposed to "shape" the precompressed video to adapt to the fluctuating bandwidth. In our earlier work, rate shaping was extended to shape the channel coded precompressed video, and to take into account the time-varying packet loss rate as well as the fluctuating bandwidth of the wireless networks. However, prior work on rate shaping can only adjust the rate coarsely. In this paper, we propose "fine-grained rate shaping (FGRS)" to allow for bandwidth adaptation over a wide range of bandwidth and packet loss rate in fine granularities. The video is precoded with fine granularity scalability (FGS) followed by channel coding. Utilizing the fine granularity property of FGS and channel coding, FGRS selectively drops part of the precoded video and still yields a decodable bit-stream at the decoder. Moreover, FGRS optimizes video streaming rather than achieving heuristic objectives as conventional methods do. A two-stage rate-distortion (RD) optimization algorithm is proposed for FGRS. Promising results of FGRS are shown.

  19. EAES: Extended Advanced Encryption Standard with Extended Security

    Directory of Open Access Journals (Sweden)

    Abul Kalam Azad

    2018-05-01

    Full Text Available Though AES is currently the most secure symmetric cipher, a review of recent attacks shows that many attacks are now becoming effective against AES as well. This paper describes an extended AES algorithm with key sizes of 256, 384 and 512 bits and round numbers of 10, 12 and 14, respectively. The data block length is 128 bits, the same as AES. Unlike AES, each round of encryption and decryption of the proposed algorithm consists of five stages, except the last one, which consists of four stages. Unlike AES, this algorithm uses two different key expansion algorithms with two different round constants, which ensures higher security than AES. Basically, the algorithm takes one cipher key and divides it into two separate sub-keys, FirstKey and SecondKey, and then expands them through two different key expansion schedules. Performance analysis shows that the proposed extended AES algorithm takes almost the same amount of time to encrypt and decrypt the same amount of data as AES, but with higher security than AES.

  20. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
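
    The largest order value rule mentioned above can be sketched in a few lines: a continuous vector is mapped to a job permutation by ordering jobs by decreasing component value. The probabilistic model and the two local searches are not shown.

```python
# Largest order value (LOV) rule: continuous vector -> job permutation.

def largest_order_value(vector):
    """Return the job permutation induced by sorting components descending."""
    return [job for _, job in sorted(
        ((v, j) for j, v in enumerate(vector)), reverse=True)]

if __name__ == "__main__":
    x = [0.32, 1.71, 0.05, 0.94]
    print(largest_order_value(x))   # -> [1, 3, 0, 2]
```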

  1. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
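
    The core of the scheme can be sketched with NumPy's Chebyshev utilities: fit a low-order Chebyshev series to each fitting interval and keep only the coefficients. Block length, polynomial degree and the absence of explicit error control are illustrative choices, not the flight algorithm's.

```python
# Per-block Chebyshev fitting as a lossy compressor for 1-D data streams.

import numpy as np
from numpy.polynomial import chebyshev as C

def compress(series, block=64, degree=7):
    """Return per-block (coefficients, block_length) pairs."""
    packed = []
    for start in range(0, len(series), block):
        y = series[start:start + block]
        x = np.linspace(-1.0, 1.0, len(y))     # map the block onto [-1, 1]
        packed.append((C.chebfit(x, y, degree), len(y)))
    return packed

def decompress(packed):
    return np.concatenate([C.chebval(np.linspace(-1.0, 1.0, n), c)
                           for c, n in packed])

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 1024)
    data = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
    packed = compress(data)
    restored = decompress(packed)
    ratio = data.size / (len(packed) * 8)      # 8 coefficients per 64 samples
    print(f"ratio ~{ratio:.0f}x, max error {np.abs(restored - data).max():.3f}")
```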

  2. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... huge amounts of stream data, as in telecommunication systems. So, there ... streams have many challenges for data mining algorithm design, like the use of ... A. Bifet and R. Gavalda, "Learning from Time-Changing Data with Adaptive ...

  3. STRIP: stream learning of influence probabilities

    DEFF Research Database (Denmark)

    Kutzkov, Konstantin

    2013-01-01

    ... cascades, and developing applications such as viral marketing. Motivated by modern microblogging platforms, such as Twitter, in this paper we study the problem of learning influence probabilities in a data-stream scenario, in which the network topology is relatively stable and the challenge for a learning algorithm is to keep up with a continuous stream of tweets using a small amount of time and memory. Our contribution is a number of randomized approximation algorithms, categorized according to the available space (superlinear, linear, and sublinear in the number of nodes n) and according to different models...

  4. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, considering its suitability for actual situations, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is also added to strengthen the local search capability of the basic genetic algorithm process. Experimental verification shows that EGA achieves satisfactory results in a very short period of time and demonstrates powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).

  5. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems]

    Science.gov (United States)

    Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  6. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad

    2017-09-27

    Optimizing the performance of big-data streaming applications has become a daunting and time-consuming task: parameters may be tuned from a space of hundreds or even thousands of possible configurations. In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm. Our results show that a hill-climbing algorithm that uses a new heuristic sampling approach based on Latin Hypercube provides the best results. Our gray-box algorithm provides comparable results while being two to five times faster.
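
    A generic sketch of the combination described above (not the paper's framework) follows: Latin hypercube samples seed a hill climber that probes neighbouring configurations. The `evaluate` function is a stand-in for running a benchmark and measuring its throughput.

```python
# Latin hypercube seeding + simple hill climbing over a configuration space.

import random

def latin_hypercube(n_samples, bounds, seed=0):
    """One stratified sample per interval slice, per parameter."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        step = (hi - lo) / n_samples
        slots = [lo + (i + rng.random()) * step for i in range(n_samples)]
        rng.shuffle(slots)
        dims.append(slots)
    return list(map(list, zip(*dims)))

def hill_climb(start, evaluate, bounds, step_frac=0.05, iters=100, seed=0):
    rng = random.Random(seed)
    best, best_score = start, evaluate(start)
    for _ in range(iters):
        cand = [min(hi, max(lo, v + rng.uniform(-1, 1) * (hi - lo) * step_frac))
                for v, (lo, hi) in zip(best, bounds)]
        score = evaluate(cand)
        if score > best_score:                 # maximize, e.g. throughput
            best, best_score = cand, score
    return best, best_score

if __name__ == "__main__":
    bounds = [(1, 64), (128, 8192)]            # e.g. workers, buffer size
    evaluate = lambda p: -(p[0] - 16) ** 2 - ((p[1] - 4096) / 256) ** 2
    seeds = latin_hypercube(8, bounds)
    results = [hill_climb(s, evaluate, bounds) for s in seeds]
    print(max(results, key=lambda r: r[1]))
```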

  7. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams.The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,

  8. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  9. Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data

    International Nuclear Information System (INIS)

    Al-Turany, M; Klein, D; Manafov, A; Rybalchenko, A; Uhlig, F

    2014-01-01

    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise accessibility for beginners and developers, to be flexible and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward the simulation of free streaming data, time-based simulation was introduced to the framework. The next step is event source simulation. This is achieved via a client-server system. After digitization, the so-called 'samplers' can be started, where each sampler reads the data of the corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate the online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer will be described.

  10. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.

  11. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    ... discusses four research directions driven by current and future application requirements, reflecting the areas identified as important by STREAM2016. These include (i) Algorithms, (ii) Programming Models, Languages and Runtime Systems, (iii) Human-in-the-loop and Steering in Scientific Workflow, and (iv) Facilities.

  12. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) on the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied on data to be recorded in 2017.

  13. Improved algorithms for approximate string matching (extended abstract

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Full Text Available Abstract Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. Results We designed an output sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online.
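
    For contrast with the output-sensitive bound above, the standard Wagner-Fischer dynamic program computes the edit distance in O(n·m) time and linear space:

```python
# Classic edit distance (Levenshtein) with two rolling rows.

def edit_distance(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a                      # keep the shorter string as columns
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution / match
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(edit_distance("kitten", "sitting"))   # -> 3
```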

  14. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    Science.gov (United States)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than an interpolated value from a pre-calculated look-up-table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers based on the transmission of the current level to produce a cumulative radiance that is reflected from a surface and measured at the aperture at the observer. Then a unique set of asymmetry and backscattering phase function parameter calculations are made which account for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  15. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever-increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.

  16. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad; Canini, Marco

    2017-01-01

    for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing

  17. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    Science.gov (United States)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the domain of mobile App recommendation, an item-based collaborative filtering algorithm is combined with a weighted Slope One algorithm to further address the cold-start and data-sparsity problems of the traditional collaborative filtering algorithm. The recommendation algorithm is then parallelised on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the real-time performance of the application recommendations.
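
    Since the record above combines item-based collaborative filtering with weighted Slope One, a minimal in-memory weighted Slope One predictor is sketched below in Python; the Spark Streaming parallelisation is not shown, and the toy ratings are illustrative.

        from collections import defaultdict

        def train_slope_one(ratings):
            """ratings: {user: {item: rating}}. Returns per item-pair average
            deviation and co-rating counts (the Slope One model)."""
            freq = defaultdict(lambda: defaultdict(int))
            dev = defaultdict(lambda: defaultdict(float))
            for items in ratings.values():
                for i, r_i in items.items():
                    for j, r_j in items.items():
                        if i != j:
                            freq[i][j] += 1
                            dev[i][j] += r_i - r_j
            for i in dev:
                for j in dev[i]:
                    dev[i][j] /= freq[i][j]
            return dev, freq

        def predict(user_ratings, target, dev, freq):
            """Weighted Slope One prediction of `target` from one user's ratings:
            deviations are weighted by how many users co-rated each item pair."""
            num = den = 0.0
            for j, r_j in user_ratings.items():
                if j != target and j in dev.get(target, {}):
                    c = freq[target][j]
                    num += (dev[target][j] + r_j) * c
                    den += c
            return None if den == 0 else num / den

        ratings = {"u1": {"a": 5, "b": 3, "c": 2},
                   "u2": {"a": 3, "b": 4},
                   "u3": {"b": 2, "c": 5}}
        dev, freq = train_slope_one(ratings)
        print(predict(ratings["u2"], "c", dev, freq))  # estimated rating of item "c" for user u2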

  18. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    Science.gov (United States)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial set overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
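
    The incremental and recursive strategies of the paper are not reproduced here; for orientation, the baseline ordinary Kriging estimate that they accelerate is sketched below in Python/NumPy. The Gaussian covariance model and its range are illustrative assumptions.

        import numpy as np

        def ordinary_kriging(points, values, query,
                             cov=lambda d: np.exp(-(d / 1.0) ** 2)):
            """Baseline ordinary Kriging estimate at `query` from n source points.
            Solves the (n+1)x(n+1) augmented system once, i.e. the O(n^3) step the
            incremental/recursive strategies above try to avoid re-doing."""
            pts = np.asarray(points, dtype=float)
            n = len(pts)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = cov(d)
            A[n, n] = 0.0                      # Lagrange multiplier block
            b = np.ones(n + 1)
            b[:n] = cov(np.linalg.norm(pts - np.asarray(query, dtype=float), axis=1))
            w = np.linalg.solve(A, b)[:n]      # Kriging weights (they sum to 1)
            return float(w @ np.asarray(values, dtype=float))

        print(ordinary_kriging([(0, 0), (1, 0), (0, 1)], [1.0, 2.0, 3.0], (0.5, 0.5)))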

  19. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

    ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier...... enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms...... to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining......
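
    Since the record mentions Dynamic Time Warping (DTW) for comparing point data streams, a textbook DTW distance is included below as a reference sketch in Python (quadratic time, no windowing constraint; not the Metafier code itself).

        def dtw_distance(a, b):
            """Textbook dynamic time warping distance between two numeric sequences."""
            INF = float("inf")
            n, m = len(a), len(b)
            D = [[INF] * (m + 1) for _ in range(n + 1)]
            D[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    # best alignment ending at (i, j): match, insertion or deletion
                    D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
            return D[n][m]

        print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # -> 0.0 (time-shifted copy)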

  20. Event Streams Clustering Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Hanen Bouali

    2015-10-01

    Full Text Available Data streams are usually of unbounded length, which pushes users to consider only recent observations by focusing on a time window and to ignore past data. However, in many real-world applications, past data must be taken into consideration to guarantee the efficiency and performance of decision making and to handle the evolution of data streams over time. In order to build a selective history to track changes in the underlying event streams, we opt for continuously feeding the sliding window, which increases the time window based on changes over historical data; this gives access to historical data without requiring any significant storage or multiple passes over the data. In this paper, we propose a new algorithm for clustering multiple data streams using an incremental support vector machine and a data representative points technique. The algorithm uses a sliding window model for the most recent clustering results and data representative points to model the old data clustering results. Our experimental results on electromyography signals show better clustering than other methods present in the literature.

  1. Modeling and clustering users with evolving profiles in usage streams

    KAUST Repository

    Zhang, Chongsheng; Masseglia, Florent; Zhang, Xiangliang

    2012-01-01

    Today, there is an increasing need for data stream mining technology to discover important patterns on the fly. Existing data stream models and algorithms commonly assume that users' records or profiles in data streams will not be updated or revised once they arrive. Nevertheless, in various applications such as Web usage, the records/profiles of the users can evolve along time. This kind of streaming data evolves in two forms, the streaming of tuples or transactions as in the case of traditional data streams, and more importantly, the evolving of user records/profiles inside the streams. Such data streams bring difficulties in modeling and clustering for exploring users' behaviors. In this paper, we propose three models to summarize this kind of data streams, which are the batch model, the Evolving Objects (EO) model and the Dynamic Data Stream (DDS) model. Through creating, updating and deleting user profiles, these models summarize the behaviors of each user as a profile object. Based upon these models, clustering algorithms are employed to discover interesting user groups from the profile objects. We have evaluated all the proposed models on a large real-world data set, showing that the DDS model summarizes the data streams with evolving tuples more efficiently and effectively, and provides a better basis for clustering users than the other two models. © 2012 IEEE.

  2. Modeling and clustering users with evolving profiles in usage streams

    KAUST Repository

    Zhang, Chongsheng

    2012-09-01

    Today, there is an increasing need for data stream mining technology to discover important patterns on the fly. Existing data stream models and algorithms commonly assume that users' records or profiles in data streams will not be updated or revised once they arrive. Nevertheless, in various applications such as Web usage, the records/profiles of the users can evolve along time. This kind of streaming data evolves in two forms, the streaming of tuples or transactions as in the case of traditional data streams, and more importantly, the evolving of user records/profiles inside the streams. Such data streams bring difficulties in modeling and clustering for exploring users' behaviors. In this paper, we propose three models to summarize this kind of data streams, which are the batch model, the Evolving Objects (EO) model and the Dynamic Data Stream (DDS) model. Through creating, updating and deleting user profiles, these models summarize the behaviors of each user as a profile object. Based upon these models, clustering algorithms are employed to discover interesting user groups from the profile objects. We have evaluated all the proposed models on a large real-world data set, showing that the DDS model summarizes the data streams with evolving tuples more efficiently and effectively, and provides a better basis for clustering users than the other two models. © 2012 IEEE.

  3. BLOSTREAM: A HIGH SPEED STREAM CIPHER

    Directory of Open Access Journals (Sweden)

    ALI H. KASHMAR

    2017-04-01

    Full Text Available Although stream ciphers are widely utilized to encrypt sensitive data at fast speeds, security concerns have led to a shift from stream to block ciphers, on the judgment that current stream cipher technology is inferior to that of block ciphers. This paper presents the design of an improved efficient and secure stream cipher called Blostream, which is more secure than conventional stream ciphers that use XOR for mixing. The proposed cipher comprises two major components: the Pseudo Random Number Generator (PRNG) using the Rabbit algorithm and a nonlinear invertible round function (combiner) for encryption and decryption. We evaluate its performance in terms of implementation and security, presenting advantages and disadvantages, a comparison of the proposed cipher with similar systems, and a statistical test for randomness. The analysis shows that the proposed cipher is more efficient, faster, and more secure than current conventional stream ciphers.
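
    Neither the Rabbit PRNG nor the Blostream combiner is reproduced here; purely to illustrate the overall structure (a keystream generator feeding a mixing step), a toy sketch follows in Python, using SHA-256 in counter mode as a stand-in generator. It is not secure and it is not the proposed cipher.

        import hashlib
        from itertools import count

        def keystream(key: bytes, nonce: bytes):
            """Toy keystream generator: SHA-256 in counter mode, a stand-in for
            the Rabbit PRNG mentioned above. NOT secure and NOT Blostream."""
            for ctr in count():
                block = hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
                yield from block

        def xor_stream_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
            """Conventional XOR mixing of the data with the keystream; Blostream
            replaces exactly this step with a nonlinear invertible round function."""
            ks = keystream(key, nonce)
            return bytes(b ^ next(ks) for b in data)

        ct = xor_stream_cipher(b"demo key", b"nonce123", b"attack at dawn")
        pt = xor_stream_cipher(b"demo key", b"nonce123", ct)  # XOR mixing is its own inverse
        print(pt)  # b'attack at dawn'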

  4. A Streaming Language Implementation of the Discontinuous Galerkin Method

    Science.gov (United States)

    Barth, Timothy; Knight, Timothy

    2005-01-01

    We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.

  5. Secure remote service execution for web media streaming

    OpenAIRE

    Mikityuk, Alexandra

    2017-01-01

    Through continuous advancements in streaming and Web technologies over the past decade, the Web has become a platform for media delivery. Web standards like HTML5 have been designed accordingly, allowing for the delivery of applications, high-quality streaming video, and hooks for interoperable content protection. Efficient video encoding algorithms such as AVC/HEVC and streaming protocols such as MPEG-DASH have served as additional triggers for this evolution. Users now employ...

  6. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
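
    The orthogonally partitioned extension is not reproduced here; as a reminder of the baseline being extended, a plain EM iteration for a two-component univariate Gaussian mixture is sketched below in Python. The quartile initialisation and iteration count are illustrative choices.

        import math

        def em_gaussian_mixture(x, iters=50):
            """Plain EM for a two-component univariate Gaussian mixture; baseline
            only (the record above further partitions the algorithm into smaller
            self-contained EM steps)."""
            x = sorted(x)
            mu = [x[len(x) // 4], x[3 * len(x) // 4]]
            var = [1.0, 1.0]
            pi = [0.5, 0.5]
            for _ in range(iters):
                # E-step: posterior responsibility of each component for each point
                resp = []
                for xi in x:
                    p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                         * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
                    s = sum(p)
                    resp.append([pk / s for pk in p])
                # M-step: re-estimate mixing weights, means and variances
                for k in range(2):
                    nk = sum(r[k] for r in resp)
                    pi[k] = nk / len(x)
                    mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
                    var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                                     for r, xi in zip(resp, x)) / nk, 1e-6)
            return pi, mu, var

        print(em_gaussian_mixture([0.1, 0.3, -0.2, 4.8, 5.1, 5.3]))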

  7. Sequential Specification of Time-aware Stream Processing Applications (Extended Abstract)

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    2012-01-01

    Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with a time dependent functional behavior has not been considered in NLPs.

  8. Stretch-minimising stream surfaces

    KAUST Repository

    Barton, Michael; Kosinka, Jin; Calo, Victor M.

    2015-01-01

    We study the problem of finding stretch-minimising stream surfaces in a divergence-free vector field. These surfaces are generated by motions of seed curves that propagate through the field in a stretch minimising manner, i.e., they move without stretching or shrinking, preserving the length of their arbitrary arc. In general fields, such curves may not exist. However, the divergence-free constraint gives rise to these 'stretch-free' curves that are locally arc-length preserving when infinitesimally propagated. Several families of stretch-free curves are identified and used as initial guesses for stream surface generation. These surfaces are subsequently globally optimised to obtain the best stretch-minimising stream surfaces in a given divergence-free vector field. Our algorithm was tested on benchmark datasets, proving its applicability to incompressible fluid flow simulations, where our stretch-minimising stream surfaces realistically reflect the flow of a flexible univariate object. © 2015 Elsevier Inc. All rights reserved.

  9. Stretch-minimising stream surfaces

    KAUST Repository

    Barton, Michael

    2015-05-01

    We study the problem of finding stretch-minimising stream surfaces in a divergence-free vector field. These surfaces are generated by motions of seed curves that propagate through the field in a stretch minimising manner, i.e., they move without stretching or shrinking, preserving the length of their arbitrary arc. In general fields, such curves may not exist. However, the divergence-free constraint gives rise to these 'stretch-free' curves that are locally arc-length preserving when infinitesimally propagated. Several families of stretch-free curves are identified and used as initial guesses for stream surface generation. These surfaces are subsequently globally optimised to obtain the best stretch-minimising stream surfaces in a given divergence-free vector field. Our algorithm was tested on benchmark datasets, proving its applicability to incompressible fluid flow simulations, where our stretch-minimising stream surfaces realistically reflect the flow of a flexible univariate object. © 2015 Elsevier Inc. All rights reserved.

  10. Evaluation of Stream Mining Classifiers for Real-Time Clinical Decision Support System: A Case Study of Blood Glucose Prediction in Diabetes Therapy

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Earlier on, a conceptual design of the real-time clinical decision support system (rt-CDSS) with data stream mining was proposed and published. The new system can analyze medical data streams and make real-time predictions. This system is based on a stream mining algorithm called VFDT. The VFDT is extended with the capability of using pointers to allow the decision tree to remember the mapping relationship between leaf nodes and the history records. In this paper, which is a sequel to the rt-CDSS design, several popular machine learning algorithms are investigated for their suitability to be candidates for the implementation of the classifier in the rt-CDSS. A classifier essentially needs to accurately map the events input to the system into one of several predefined classes of assessments, such that the rt-CDSS can follow up with the prescribed remedies recommended to the clinicians. For a real-time system like rt-CDSS, the major technological challenges lie in the capability of the classifier to process, analyze and classify the dynamic input data quickly and with the utmost reliability. An experimental comparison is conducted. This paper contributes to the insight of choosing and embedding a stream mining classifier into rt-CDSS with a case study of diabetes therapy.

  11. CC_TRS: Continuous Clustering of Trajectory Stream Data Based on Micro Cluster Life

    Directory of Open Access Journals (Sweden)

    Musaab Riyadh

    2017-01-01

    Full Text Available The rapid spread of positioning devices leads to the generation of massive spatiotemporal trajectory data. In some scenarios, spatiotemporal data are received in a streaming manner. Clustering of stream data is beneficial for different applications such as traffic management and weather forecasting. In this article, an algorithm for Continuous Clustering of Trajectory Stream Data Based on Micro Cluster Life is proposed. The algorithm consists of two phases. In the online phase, temporal micro clusters are used to store summarized spatiotemporal information for each group of similar segments. The clustering task in the online phase is based on temporal micro cluster lifetime instead of the time window technique, which divides stream data into time bins and clusters each bin separately. In the offline phase, a density based clustering approach is used to generate macro clusters depending on the temporal micro clusters. The evaluation of the proposed algorithm on real data sets shows the efficiency and the effectiveness of the proposed algorithm and proves that it is an efficient alternative to the time window technique.
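
    The CC_TRS algorithm itself is not shown; the Python sketch below only illustrates the kind of cluster-feature summary (count, linear sum, squared sum, last-update time) that temporal micro clusters typically maintain in the online phase. Segment similarity, lifetime management and the density-based offline phase are omitted.

        import math

        class MicroCluster:
            """Cluster feature vector (N, LS, SS) plus a last-update timestamp,
            the usual per-micro-cluster summary in stream clustering."""

            def __init__(self, point, t):
                self.n = 1
                self.ls = list(point)             # linear sum per dimension
                self.ss = [v * v for v in point]  # squared sum per dimension
                self.last_update = t

            def insert(self, point, t):
                self.n += 1
                for d, v in enumerate(point):
                    self.ls[d] += v
                    self.ss[d] += v * v
                self.last_update = t

            def centroid(self):
                return [s / self.n for s in self.ls]

            def radius(self):
                # RMS deviation around the centroid, derived from (N, LS, SS)
                var = sum(self.ss[d] / self.n - (self.ls[d] / self.n) ** 2
                          for d in range(len(self.ls)))
                return math.sqrt(max(var, 0.0))

        mc = MicroCluster((1.0, 2.0), t=0)
        mc.insert((1.2, 2.1), t=1)
        print(mc.centroid(), round(mc.radius(), 3))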

  12. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    Science.gov (United States)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  13. Computing Diameter in the Streaming and Sliding-Window Models (Preprint)

    National Research Council Canada - National Science Library

    Feigenbaum, Joan; Kannan, Sampath; Zhang, Jian

    2002-01-01

    We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of n points or a sliding window of size n, any exact algorithm for diameter requires Omega(n) bits of space...

  14. A GA-P algorithm to automatically formulate extended Boolean queries for a fuzzy information retrieval system

    OpenAIRE

    Cordón García, Oscar; Moya Anegón, Félix de; Zarco Fernández, Carmen

    2000-01-01

    [ES] Although the fuzzy retrieval model constitutes a powerful extension of the boolean one, being able to deal with the imprecision and subjectivity existing in the Information Retrieval process, users are not usually able to express their query requirements in the form of an extended boolean query including weights. To solve this problem, different tools to assist the user in the query formulation have been proposed. In this paper, the genetic algorithm-programming technique is considered t...

  15. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time series data stream, the generation of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms to generate the minimal number of segments. In the literature, the optimal approximation algorithms are effectively designed based on transformed space other than time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, which are named OptimalPLR and GreedyPLR, respectively. The OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and the GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in transformed space, which also achieves linear complexity. We successfully proved the theoretical equivalence between time-value space and such transformed space, and also discovered the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.

  16. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing

    2014-04-04

    Given a time series data stream, the generation of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms to generate the minimal number of segments. In the literature, the optimal approximation algorithms are effectively designed based on transformed space other than time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, which are named OptimalPLR and GreedyPLR, respectively. The OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and the GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in transformed space, which also achieves linear complexity. We successfully proved the theoretical equivalence between time-value space and such transformed space, and also discovered the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.
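
    OptimalPLR and GreedyPLR are not reproduced in these records; the Python sketch below shows a simplified greedy variant of L∞ error-bounded segmentation (anchor each segment at its first point, maintain a feasible slope interval, and cut when it becomes empty), assuming strictly increasing timestamps. It illustrates the general idea only, not the authors' algorithms.

        def greedy_plr(points, eps):
            """Greedy L-infinity error-bounded piecewise linear segmentation.
            Each segment is a line anchored at its first point; the feasible
            slope interval [lo, hi] is intersected as points arrive and the
            segment is closed once the interval becomes empty, so every point
            stays within eps of its segment's line.
            Returns (t0, y0, slope, t_end) tuples."""
            segments = []
            t0, y0 = points[0]
            lo, hi = float("-inf"), float("inf")
            t_end = t0
            for t, y in points[1:]:
                new_lo = (y - eps - y0) / (t - t0)
                new_hi = (y + eps - y0) / (t - t0)
                if max(lo, new_lo) > min(hi, new_hi):
                    # no single slope fits any more: emit the segment, start a new one
                    segments.append((t0, y0, 0.0 if lo == float("-inf") else (lo + hi) / 2, t_end))
                    t0, y0 = t, y
                    lo, hi = float("-inf"), float("inf")
                else:
                    lo, hi = max(lo, new_lo), min(hi, new_hi)
                t_end = t
            segments.append((t0, y0, 0.0 if lo == float("-inf") else (lo + hi) / 2, t_end))
            return segments

        print(greedy_plr([(0, 0), (1, 1), (2, 2.1), (3, 0), (4, -1)], eps=0.2))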

  17. Removal of sulfur from process streams

    International Nuclear Information System (INIS)

    Brignac, D.G.

    1984-01-01

    A process wherein water is added to a non-reactive gas stream, preferably a hydrogen or hydrogen-containing gas stream, sufficient to raise the water level thereof to from about 0.2 percent to about 50 percent, based on the total volume of the process gas stream, and the said moist gas stream is contacted, at elevated temperature, with a particulate mass of a sulfur-bearing metal alumina spinel characterized by the formula MAl2O4, wherein M is chromium, iron, cobalt, nickel, copper, cadmium, mercury, or zinc to desorb sulfur thereon. In the sulfur sorption cycle, due to the simultaneous adsorption of water and sulfur, the useful life of the metal alumina spinel for sulfur adsorption can be extended, and the sorbent made more easily regenerable after contact with a sulfur-bearing gas stream, notably sulfur-bearing wet hydrogen or wet hydrogen-rich gas streams

  18. Design and implementation of streaming media server cluster based on FFMpeg.

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.

  19. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
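
    The dynamic load-balancing algorithm based on active feedback is not specified in detail in these records; the Python sketch below is only a toy illustration of the general idea (servers report a load score, and new sessions are dispatched to the least-loaded server in the client's region). All names are illustrative.

        class FeedbackLoadBalancer:
            """Toy dispatcher: servers periodically report a load score (active
            feedback) and new sessions go to the least-loaded server in the
            client's region, falling back to the global minimum."""

            def __init__(self):
                self.load = {}     # server -> last reported load (0..1)
                self.region = {}   # server -> region label

            def report(self, server, region, load):
                self.load[server] = load
                self.region[server] = region

            def pick(self, client_region):
                local = [s for s in self.load if self.region[s] == client_region]
                pool = local or list(self.load)
                return min(pool, key=lambda s: self.load[s])

        lb = FeedbackLoadBalancer()
        lb.report("edge-eu-1", "eu", 0.72)
        lb.report("edge-eu-2", "eu", 0.35)
        lb.report("edge-us-1", "us", 0.10)
        print(lb.pick("eu"))  # -> edge-eu-2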

  20. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Karsten eSpecht

    2013-01-01

    Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus...

  1. Mapping a lateralization gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus....

  2. Mining top-k frequent closed itemsets in data streams using sliding window

    International Nuclear Information System (INIS)

    Rehman, Z.; Shahbaz, M.

    2013-01-01

    Frequent itemset mining has become a popular research area in the data mining community over the last few years. There are two main technical hitches in finding frequent itemsets. First, an appropriate minimum support value must be provided to start, and the user needs to tune this minimum support value by running the algorithm again and again. Secondly, the generated frequent itemsets are usually numerous, and as a result the number of association rules generated is also very large. Applications dealing with a streaming environment need to process the data received at a high rate; therefore, finding frequent itemsets in data streams becomes complex. In this paper, we present an algorithm to mine top-k frequent closed itemsets from streaming data using a sliding window approach. We developed a single-pass algorithm to find frequent closed itemsets of length between a user-defined minimum and maximum length. To improve the performance of the algorithm and to avoid rescanning of data, we have transformed the data into a bitmap based tree data structure. (author)

  3. An Association-Oriented Partitioning Approach for Streaming Graph Query

    Directory of Open Access Journals (Sweden)

    Yun Hao

    2017-01-01

    Full Text Available The volumes of real-world graphs like knowledge graphs are increasing rapidly, which makes streaming graph processing a hot research area. Processing graphs in a streaming setting poses significant challenges from different perspectives, among which the graph partitioning method plays a key role. Regarding graph query, a well-designed partitioning method is essential for achieving better performance. Existing offline graph partitioning methods often require full knowledge of the graph, which is not possible during streaming graph processing. In order to handle this problem, we propose an association-oriented streaming graph partitioning method named Assc. This approach first computes the rank values of vertices with a hybrid approximate PageRank algorithm. After splitting these vertices with an adapted variant of the affinity propagation algorithm, the processing order of the vertices in the sliding window can be determined. Finally, according to the level of these vertices and their association, the partition to which the vertices should be distributed is decided. We compare its performance with a set of streaming graph partitioning methods and METIS, a widely adopted offline approach. The results show that our solution can partition graphs with hundreds of millions of vertices in a streaming setting on a large collection of graph datasets and that our approach outperforms other graph partitioning methods.

  4. Mutual Information Based Dynamic Integration of Multiple Feature Streams for Robust Real-Time LVCSR

    Science.gov (United States)

    Sato, Shoei; Kobayashi, Akio; Onoe, Kazuo; Homma, Shinichi; Imai, Toru; Takagi, Tohru; Kobayashi, Tetsunori

    We present a novel method of integrating the likelihoods of multiple feature streams, representing different acoustic aspects, for robust speech recognition. The integration algorithm dynamically calculates a frame-wise stream weight so that a higher weight is given to a stream that is robust to a variety of noisy environments or speaking styles. Such a robust stream is expected to show discriminative ability. A conventional method proposed for the recognition of spoken digits calculates the weights from the entropy of the whole set of HMM states. This paper extends the dynamic weighting to a real-time large-vocabulary continuous speech recognition (LVCSR) system. The proposed weight is calculated in real-time from mutual information between an input stream and active HMM states in a search space without an additional likelihood calculation. Furthermore, the mutual information takes the width of the search space into account by calculating the marginal entropy from the number of active states. In this paper, we integrate three features that are extracted through auditory filters by taking into account the human auditory system's ability to extract amplitude and frequency modulations. Due to this, features representing energy, amplitude drift, and resonant frequency drift are integrated. These features are expected to provide complementary clues for speech recognition. Speech recognition experiments on field reports and spontaneous commentary from Japanese broadcast news showed that the proposed method reduced word errors by 9.2% in field reports and 4.7% in spontaneous commentaries relative to the best result obtained from a single stream.

  5. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    Science.gov (United States)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    Maximal multicast stream algorithms based on network coding (NC) can improve network throughput for wavelength-division multiplexing (WDM) networks; however, the throughput achieved is still far below the network's theoretical maximum. Moreover, the existing multicast stream algorithms do not provide the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is put forward to maximize the optical multicast throughput by NC and to determine the multicast stream distribution through hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decoding multicast streams. The simulation results showed that the proposed method is far superior to typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.

  6. nitrogen saturation in stream ecosystems

    OpenAIRE

    Earl, S. R.; Valett, H. M.; Webster, J. R.

    2006-01-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient ...

  7. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a strong, simple block cipher used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, where the key generation process of the SSS was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and a reasonably low amount of memory, which enhances the algorithm and opens the possibility of different usages.

  8. Nitrogen saturation in stream ecosystems.

    Science.gov (United States)

    Earl, Stevan R; Valett, H Maurice; Webster, Jackson R

    2006-12-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient of background N concentration. Uptake increased in four of six streams as NO3-N was incrementally elevated, indicating that these streams were not saturated. Uptake generally corresponded to Michaelis-Menten kinetics but deviated from the model in two streams where some other growth-critical factor may have been limiting. Proximity to saturation was correlated to background N concentration but was better predicted by the ratio of dissolved inorganic N (DIN) to soluble reactive phosphorus (SRP), suggesting phosphorus limitation in several high-N streams. Uptake velocity, a reflection of uptake efficiency, declined nonlinearly with increasing N amendment in all streams. At the same time, uptake velocity was highest in the low-N streams. Our conceptual model of N transport, uptake, and uptake efficiency suggests that, while streams may be active sites of N uptake on the landscape, N saturation contributes to nonlinear changes in stream N dynamics that correspond to decreased uptake efficiency.
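
    For reference, the Michaelis-Menten uptake model that both nitrogen-saturation records build on can be written as follows (standard notation, not taken from the paper: U is areal uptake, C the nitrate concentration, U_max the maximum uptake rate, K_s the half-saturation constant, and v_f the uptake velocity):

        U = \frac{U_{\max}\, C}{K_s + C}, \qquad
        v_f = \frac{U}{C} = \frac{U_{\max}}{K_s + C}

    The second relation makes explicit why uptake velocity (uptake efficiency) declines nonlinearly as concentration increases, which is the pattern reported in the abstract; the saturation response types proposed by the authors are not reproduced here.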

  9. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  10. Stream Tracker: Crowd sourcing and remote sensing to monitor stream flow intermittence

    Science.gov (United States)

    Puntenney, K.; Kampf, S. K.; Newman, G.; Lefsky, M. A.; Weber, R.; Gerlich, J.

    2017-12-01

    Streams that do not flow continuously in time and space support diverse aquatic life and can be critical contributors to downstream water supply. However, these intermittent streams are rarely monitored and poorly mapped. Stream Tracker is a community powered stream monitoring project that pairs citizen contributed observations of streamflow presence or absence with a network of streamflow sensors and remotely sensed data from satellites to track when and where water is flowing in intermittent stream channels. Citizens can visit sites on roads and trails to track flow and contribute their observations to the project site hosted by CitSci.org. Data can be entered using either a mobile application with offline capabilities or an online data entry portal. The sensor network provides a consistent record of streamflow and flow presence/absence across a range of elevations and drainage areas. Capacitance, resistance, and laser sensors have been deployed to determine the most reliable, low cost sensor that could be mass distributed to track streamflow intermittence over a larger number of sites. Streamflow presence or absence observations from the citizen and sensor networks are then compared to satellite imagery to improve flow detection algorithms using remotely sensed data from Landsat. In the first two months of this project, 1,287 observations have been made at 241 sites by 24 project members across northern and western Colorado.

  11. The entity-to-algorithm allocation problem: Extending the analysis

    CSIR Research Space (South Africa)

    Grobler, J

    2014-12-01

    Full Text Available [Only table fragments survive in this record excerpt: a table titled "Hypotheses analysis of alternative multi-method algorithms", giving pairwise win-draw-loss counts among HMHH, EIHH, EEA-SLPS and Multi-EA, and a table titled "Analysis of the various algorithms versus their constituent algorithms", giving win-draw-loss counts of HMHH, EIHH, EEA-SLPS and Multi-EA against the constituent algorithms CMAES, SaNSDE, GA and GCPSO; both tables are truncated in the source.]

  12. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  13. StreamSqueeze: a dynamic stream visualization for monitoring of event data

    Science.gov (United States)

    Mansmann, Florian; Krstajic, Milos; Fischer, Fabian; Bertini, Enrico

    2012-01-01

    While automated analytical solutions for data streams are already in place for clear-cut situations, only few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, due to the competitive or security-related advantages that real-time information gives in domains such as finance, business or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique the degree of interest of recent items is expressed through an increase in size and thus recent events can be shown with more detail. The technique has two main benefits: First, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to the other triggers the least visual change. Second, the animation scheme ensures that for 50 percent of the time an item has a static screen position where reading is most effective and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.

  14. EAES: Extended Advanced Encryption Standard with Extended Security

    OpenAIRE

    Abul Kalam Azad; Md. Yamin Mollah

    2018-01-01

    Though AES is currently the most secure symmetric cipher, a review of recent attacks shows that many attacks are now effective against AES as well. This paper describes an extended AES algorithm with key sizes of 256, 384 and 512 bits and round numbers of 10, 12 and 14 respectively. The data block length is 128 bits, the same as AES. But unlike AES, each round of encryption and decryption of this proposed algorithm consists of five stages except the last one, which consists of four st...

  15. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    Science.gov (United States)

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-09

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass

  16. The ClusTree : indexing micro-clusters for anytime stream mining

    DEFF Research Database (Denmark)

    Kranen, Philipp; Assent, Ira; Baldauf, Corinna

    2011-01-01

    -arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter-free algorithm that automatically adapts...... introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Additionally we present solutions to handle very fast streams through aggregation mechanisms and propose novel descent strategies that improve the clustering result on slower streams as long as time permits...

  17. Extending Wireless Rechargeable Sensor Network Life without Full Knowledge.

    Science.gov (United States)

    Najeeb, Najeeb W; Detweiler, Carrick

    2017-07-17

    When extending the life of Wireless Rechargeable Sensor Networks (WRSN), one challenge is charging networks as they grow larger. Overcoming this limitation will render a WRSN more practical and highly adaptable to growth in the real world. Most charging algorithms require a priori full knowledge of sensor nodes' power levels in order to determine the nodes that require charging. In this work, we present a probabilistic algorithm that extends the life of scalable WRSN without a priori power knowledge and without full network exploration. We develop a probability bound on the power level of the sensor nodes and utilize this bound to make decisions while exploring a WRSN. We verify the algorithm by simulating a wireless power transfer unmanned aerial vehicle, and charging a WRSN to extend its life. Our results show that, without knowledge, our proposed algorithm extends the life of a WRSN on average 90% of what an optimal full knowledge algorithm can achieve. This means that the charging robot does not need to explore the whole network, which enables the scaling of WRSN. We analyze the impact of network parameters on our algorithm and show that it is insensitive to a large range of parameter values.

  18. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which needs a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partitioning of the grid with traditional methods and applies an efficient global reduction algorithm to compute the drainage areas and transport rates for the stream net; the other algorithm is based on a new partitioning algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.

  19. Dynamic Programming Optimization of Multi-rate Multicast Video-Streaming Services

    Directory of Open Access Journals (Sweden)

    Nestor Michael Caños Tiglao

    2010-06-01

    Full Text Available In large scale IP Television (IPTV and Mobile TV distributions, the video signal is typically encoded and transmitted using several quality streams, over IP Multicast channels, to several groups of receivers, which are classified in terms of their reception rate. As the number of video streams is usually constrained by both the number of TV channels and the maximum capacity of the content distribution network, it is necessary to find the selection of video stream transmission rates that maximizes the overall user satisfaction. In order to efficiently solve this problem, this paper proposes the Dynamic Programming Multi-rate Optimization (DPMO algorithm. The latter was comparatively evaluated considering several user distributions, featuring different access rate patterns. The experimental results reveal that DPMO is significantly more efficient than exhaustive search, while presenting slightly higher execution times than the non-optimal Multi-rate Step Search (MSS algorithm.
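
    The DPMO formulation itself is not given in the record; purely to illustrate how dynamic programming applies to multi-rate selection, the Python sketch below picks k stream rates that maximise the total received rate when every user joins the fastest chosen stream not exceeding their access rate. This is a simplified satisfaction model under stated assumptions, not the authors' algorithm, and all names are illustrative.

        from functools import lru_cache

        def best_stream_rates(user_rates, k):
            """Choose k multicast stream rates maximising the total received rate,
            assuming each user joins the fastest chosen stream not exceeding their
            own access rate. O(m^2 * k) dynamic program over the m distinct
            candidate rates."""
            rates = sorted(set(user_rates))
            m = len(rates)
            # at_or_above[i] = number of users whose access rate >= rates[i]
            at_or_above = [sum(1 for r in user_rates if r >= rates[i]) for i in range(m)]

            @lru_cache(maxsize=None)
            def best(i, left):
                # rates[i] is the slowest chosen stream; up to `left` faster ones may follow
                value = rates[i] * at_or_above[i]   # serve every remaining user at rates[i]
                if left:
                    for j in range(i + 1, m):
                        served_here = at_or_above[i] - at_or_above[j]
                        value = max(value, rates[i] * served_here + best(j, left - 1))
                return value

            return max(best(i, k - 1) for i in range(m)) if m else 0

        print(best_stream_rates([1, 1, 2, 4, 4, 8], k=2))  # -> 16 (streams at rates 4 and 8)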

  20. Ensemble Classification of Data Streams Based on Attribute Reduction and a Sliding Window

    Directory of Open Access Journals (Sweden)

    Yingchun Chen

    2018-04-01

    Full Text Available With the current increasing volume and dimensionality of data, traditional data classification algorithms are unable to satisfy the demands of practical classification applications of data streams. To deal with noise and concept drift in data streams, we propose an ensemble classification algorithm based on attribute reduction and a sliding window in this paper. Using mutual information, an approximate attribute reduction algorithm based on rough sets is used to reduce data dimensionality and increase the diversity of reduced results in the algorithm. A double-threshold concept drift detection method and a three-stage sliding window control strategy are introduced to improve the performance of the algorithm when dealing with both noise and concept drift. The classification precision is further improved by updating the base classifiers and their nonlinear weights. Experiments on synthetic datasets and actual datasets demonstrate the performance of the algorithm in terms of classification precision, memory use, and time efficiency.

  1. VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans

    Science.gov (United States)

    Wang, Song; Gupta, Chetan; Mehta, Abhay

    There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.

  2. Modified temporal approach to meta-optimizing an extended Kalman filter's parameters

    CSIR Research Space (South Africa)

    Salmon

    2014-07-01

    Full Text Available 2014 IEEE International Geoscience and Remote Sensing Symposium, Québec, Canada, 13-18 July 2014. A modified temporal approach to meta-optimizing an Extended Kalman Filter's parameters. B. P. Salmon; W. Kleynhans; J. C. Olivier; W. C. Olding; K. J. Wessels; F. van den Bergh...

  3. A high-precision algorithm for axisymmetric flow

    Directory of Open Access Journals (Sweden)

    A. Gokhman

    1995-01-01

    Full Text Available We present a new algorithm for highly accurate computation of axisymmetric potential flow. The principal feature of the algorithm is the use of orthogonal curvilinear coordinates. These coordinates are used to write down the equations and to specify quadrilateral elements following the boundary. In particular, boundary conditions for the Stokes' stream-function are satisfied exactly. The velocity field is determined by differentiating the stream-function. We avoid the use of quadratures in the evaluation of Galerkin integrals, and instead use splining of the boundaries of elements to take the double integrals of the shape functions in closed form. This is very accurate and not time consuming.
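
    As background for the quantities involved, the velocity components recovered by differentiating the Stokes stream function ψ(r, z) of an axisymmetric incompressible flow are, in cylindrical coordinates and up to a sign convention (standard relations, not the paper's curvilinear formulation):

        u_r = -\frac{1}{r}\,\frac{\partial \psi}{\partial z}, \qquad
        u_z = \frac{1}{r}\,\frac{\partial \psi}{\partial r}

    These satisfy the axisymmetric continuity equation identically, so a velocity field obtained by differentiating the stream function is automatically divergence-free.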

  4. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

    [Report front matter and list-of-figures residue; the recoverable details are the contract number FA8750-14-2-0072, program element 62788F, and the figure captions "The 3D processing pipeline flowchart showing key modules" and "Overall view (data flow) of the proposed pipeline". The surviving abstract fragment refers to a structure-from-motion and bundle adjustment algorithm and to fusion of depth masks of the scene obtained from 3D ...]

  5. Frequent Pairs in Data Streams: Exploiting Parallelism and Skew

    DEFF Research Database (Denmark)

    Campagna, Andrea; Kutzkow, Konstantin; Pagh, Rasmus

    2011-01-01

    We introduce the Pair Streaming Engine (PairSE) that detects frequent pairs in a data stream of transactions. Our algorithm finds the most frequent pairs with high probability, and gives tight bounds on their frequency. It is particularly space efficient for skewed distribution of pair supports...... items mining in data streams. We show how to efficiently scale these approaches to handle large transactions. We report experimental results showcasing precision and recall of our method. In particular, we find that often our method achieves excellent precision, returning identical upper and lower...... bounds on the supports of the most frequent pairs....

  6. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and

  7. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  8. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    Science.gov (United States)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. This method requires relatively low reference angles compared with the conventional method, which uses phase shifts between three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm and a corresponding number of pixels equal to the period of an interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications such as color holographic interferometry.

  9. Reacting to different types of concept drift: the Accuracy Updated Ensemble algorithm.

    Science.gov (United States)

    Brzezinski, Dariusz; Stefanowski, Jerzy

    2014-01-01

    Data stream mining has been receiving increased attention due to its presence in a wide range of applications, such as sensor networks, banking, and telecommunication. One of the most important challenges in learning from data streams is reacting to concept drift, i.e., unforeseen changes of the stream's underlying data distribution. Several classification algorithms that cope with concept drift have been put forward; however, most of them specialize in one type of change. In this paper, we propose a new data stream classifier, called the Accuracy Updated Ensemble (AUE2), which aims at reacting equally well to different types of drift. AUE2 combines accuracy-based weighting mechanisms known from block-based ensembles with the incremental nature of Hoeffding Trees. The proposed algorithm is experimentally compared with 11 state-of-the-art stream methods, including single classifiers, block-based and online ensembles, and hybrid approaches, in different drift scenarios. Out of all the compared algorithms, AUE2 provided the best average classification accuracy while proving to be less memory consuming than other ensemble approaches. Experimental results show that AUE2 can be considered suitable for scenarios involving many types of drift as well as static environments.
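    To make the weighting idea concrete, here is a minimal sketch of an accuracy-weighted, block-based ensemble in the spirit of AUE2, not the full algorithm (it uses plain decision trees rather than Hoeffding Trees and omits incremental updates); scikit-learn is assumed and the data and labels are synthetic.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      class WeightedEnsemble:
          def __init__(self, max_members=5):
              self.members, self.weights, self.max_members = [], [], max_members

          def update_with_block(self, X, y):
              # re-weight existing members by their accuracy on the newest block
              self.weights = [max(m.score(X, y), 1e-6) for m in self.members]
              # train a new member on the block and give it full weight
              self.members.append(DecisionTreeClassifier(max_depth=5).fit(X, y))
              self.weights.append(1.0)
              if len(self.members) > self.max_members:     # drop the weakest member
                  worst = int(np.argmin(self.weights))
                  del self.members[worst], self.weights[worst]

          def predict(self, X):
              votes = np.zeros((len(X), 2))                # binary labels 0/1 assumed
              for member, w in zip(self.members, self.weights):
                  for i, label in enumerate(member.predict(X)):
                      votes[i, int(label)] += w
              return votes.argmax(axis=1)

      rng = np.random.default_rng(0)
      ensemble = WeightedEnsemble()
      for _ in range(3):                                   # three blocks of the stream
          X = rng.normal(size=(200, 4))
          y = (X[:, 0] > 0).astype(int)
          ensemble.update_with_block(X, y)
      print(ensemble.predict(rng.normal(size=(5, 4))))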

  10. Efficient Processing of Continuous Skyline Query over Smarter Traffic Data Stream for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wang Hanning

    2013-01-01

    Full Text Available The analysis and processing of multisource real-time transportation data streams lay a foundation for smart transportation's sensing, interconnection, integration, and real-time decision making. The strong computing ability and effective mass-data management offered by cloud computing make it feasible to handle continuous Skyline queries over massive, distributed, uncertain transportation data streams. In this paper, we give an architecture for layered smart transportation data processing and formalize the description of the continuous Skyline query over smart transportation data. We also propose the mMR-SUDS algorithm (a Skyline query algorithm for uncertain transportation stream data based on micro-batch MapReduce), built on sliding-window division and this architecture.
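    As a toy illustration only (not the mMR-SUDS algorithm: no MapReduce, no uncertainty handling, and invented data), the sketch below maintains a continuous skyline over a count-based sliding window, where a point such as (travel time, cost) is on the skyline if no other point in the window is at least as good in every dimension and strictly better in one.

      from collections import deque

      def dominates(a, b):
          """True if a is at least as good as b everywhere and strictly better somewhere (smaller is better)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def skyline(points):
          return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

      window = deque(maxlen=4)                 # sliding window of the 4 most recent points
      stream = [(10, 5), (8, 7), (12, 3), (9, 4), (7, 9)]
      for point in stream:
          window.append(point)                 # the oldest point expires automatically
          print(point, "-> skyline:", skyline(list(window)))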

  11. A folding algorithm for extended RNA secondary structures.

    Science.gov (United States)

    Höner zu Siederdissen, Christian; Bernhart, Stephan H; Stadler, Peter F; Hofacker, Ivo L

    2011-07-01

    RNA secondary structure contains many non-canonical base pairs of different pair families. Successful prediction of these structural features leads to improved secondary structures with applications in tertiary structure prediction and simultaneous folding and alignment. We present a theoretical model capturing both RNA pair families and extended secondary structure motifs with shared nucleotides using 2-diagrams. We accompany this model with a number of programs for parameter optimization and structure prediction. All sources (optimization routines, RNA folding, RNA evaluation, extended secondary structure visualization) are published under the GPLv3 and available at www.tbi.univie.ac.at/software/rnawolf/.

  12. New Splitting Criteria for Decision Trees in Stationary Data Streams.

    Science.gov (United States)

    Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek

    2018-06-01

    The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms, which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. A general division of splitting criteria into two types is proposed. Attributes chosen based on type-I splitting criteria guarantee, with high probability, the highest expected value of the split measure. Type-II criteria ensure that the chosen attribute is the same, with high probability, as it would be chosen based on the whole infinite data stream. Moreover, in this paper, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and Gini index.
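    For concreteness, the two impurity measures that the new criteria build on can be evaluated for a candidate binary split as in the sketch below; this illustrates only the measures themselves, not the probabilistic guarantees derived in the paper, and the example labels are invented.

      from collections import Counter

      def gini(labels):
          n = len(labels)
          return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

      def misclassification(labels):
          n = len(labels)
          return 1.0 - max(Counter(labels).values()) / n

      def split_gain(parent, left, right, impurity):
          """Impurity reduction achieved by splitting `parent` into `left` and `right`."""
          n = len(parent)
          weighted = (len(left) / n) * impurity(left) + (len(right) / n) * impurity(right)
          return impurity(parent) - weighted

      parent = ["a"] * 6 + ["b"] * 4
      left, right = ["a"] * 5 + ["b"], ["a"] + ["b"] * 3
      print("Gini gain:              ", round(split_gain(parent, left, right, gini), 3))
      print("Misclassification gain: ", round(split_gain(parent, left, right, misclassification), 3))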

  13. THE PAL 5 STAR STREAM GAPS

    International Nuclear Information System (INIS)

    Carlberg, R. G.; Hetherington, Nathan; Grillmair, C. J.

    2012-01-01

    Pal 5 is a low-mass, low-velocity-dispersion globular cluster with spectacular tidal tails. We use the Sloan Digital Sky Survey Data Release 8 data to extend the density measurements of the trailing star stream to 23 deg distance from the cluster, at which point the stream runs off the edge of the available sky coverage. The size and the number of gaps in the stream are measured using a filter which approximates the structure of the gaps found in stream simulations. We find 5 gaps that are at least 99% confidence detections, with about a dozen gaps at 90% confidence. The statistical significance of a gap is estimated using bootstrap resampling of the control regions on either side of the stream. The density minimum closest to the cluster is likely the result of the epicyclic orbits of the tidal outflow and has been discounted. To create the number of 99% confidence gaps per unit length at the mean age of the stream requires a halo population of nearly a thousand dark matter sub-halos with peak circular velocities above 1 km s⁻¹ within 30 kpc of the galactic center. These numbers are a factor of about three below cold stream simulations at this sub-halo mass or velocity but, given the uncertainties in both measurement and more realistic warm stream modeling, are in substantial agreement with the LCDM prediction.

  14. Solvability of Extended General Strongly Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Balwant Singh Thakur

    2013-10-01

    Full Text Available In this paper, a new class of extended general strongly mixed variational inequalities is introduced and studied in Hilbert spaces. An existence theorem for solutions is established and, using the resolvent operator technique, a new iterative algorithm for solving the extended general strongly mixed variational inequality is suggested. A convergence result for the iterative sequence generated by the new algorithm is also established.

  15. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
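    A hedged sketch of the first stage only is given below: byte streams are mapped to a few simple randomness features and an l1-regularized logistic regression separates "encrypted-like" from "plain-like" samples. The features, training data, and parameters are illustrative, scikit-learn is assumed, and the fuzzy Gaussian mixture stage of the paper is omitted.

      import math, os
      import numpy as np
      from collections import Counter
      from sklearn.linear_model import LogisticRegression

      def randomness_features(data: bytes):
          counts = Counter(data)
          n = len(data)
          entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())  # byte entropy
          return [entropy, sum(data) / (255.0 * n), len(counts) / 256.0]

      # Synthetic training data: os.urandom stands in for ciphertext, ASCII text for plaintext.
      encrypted = [os.urandom(1024) for _ in range(50)]
      plain = [(b"GET /index.html HTTP/1.1 " * 64)[:1024] for _ in range(50)]
      X = np.array([randomness_features(d) for d in encrypted + plain])
      y = np.array([1] * 50 + [0] * 50)

      clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
      print(clf.predict([randomness_features(os.urandom(1024))]))   # expected: [1]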

  16. Clustering big data streams : recent challenges and contributions

    NARCIS (Netherlands)

    Hassani, M.; Seidl, T.

    Traditional clustering algorithms merely considered static data. Today's various applications and research issues in big data mining have however to deal with continuous, possibly infinite streams of data, arriving at high velocity. Web traffic data, surveillance data, sensor measurements and stock

  17. Fish diversity in adjacent ambient, thermal, and post-thermal freshwater streams

    International Nuclear Information System (INIS)

    McFarlane, R.W.

    1976-01-01

    The Savannah River Plant area is drained by five streams of various sizes and thermal histories. One has never been thermally stressed, two presently receive thermal effluent, and two formerly received thermal effluent from nuclear production reactors. Sixty-four species of fishes are known to inhabit these streams; 55 species is the highest number obtained from any one stream. Thermal effluent in small streams excludes fish during periods of high temperatures, but the streams are rapidly reinvaded when temperatures subside below lethal limits. Some cyprinids become extinct in nonthermal tributaries upstream from the thermal effluents after extended periods of thermal stress. This extinction is similar to that which follows stream impoundment. Post-thermal streams rapidly recover their fish diversity and abundance. The alteration of the streambed and removal of overhead canopy may change the stream characteristics and modify the post-thermal fish fauna

  18. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation

  19. Sound stream segregation: a neuromorphic approach to solve the ‘cocktail party problem’ in real-time

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2015-09-01

    Full Text Available The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the ‘cocktail party effect’. It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77 and 55 dB for simple tone, complex tone and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for

  20. On finding similar items in a stream of transactions

    DEFF Research Database (Denmark)

    Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    While there has been a lot of work on finding frequent itemsets in transaction data streams, none of these solve the problem of finding similar pairs according to standard similarity measures. This paper is a first attempt at dealing with this, arguably more important, problem. We start out with ...... in random order, and show that surprisingly, not only is small-space similarity mining possible for the most common similarity measures, but the mining accuracy improves with the length of the stream for any fixed support threshold....... with a negative result that also explains the lack of theoretical upper bounds on the space usage of data mining algorithms for finding frequent itemsets: any algorithm that (even only approximately and with a chance of error) finds the most frequent k-itemset must use space Ω(...

  1. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  2. Q-Method Extended Kalman Filter

    Science.gov (United States)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.

  3. Alignment data streams for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B; Amorim, A; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; Garcia, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter rate output, down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model which embraces Grid paradigm. The output coming from the Event Filter consists of four main streams: physical stream, express stream, calibration stream, and diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities that will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and possibly gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work is addressing in particular the alignment requirements, the needs for track and hit selection, and the performance issues

  4. Alignment data stream for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B

    2010-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to achieve the necessary Event Filter rate output, making possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and the calibration stream. The calibration stream will be transferred to the Tier-0 facilities which will allow the prompt reconstruction of this stream with an admissible latency of 8 hours, producing calibration constants of sufficient quality to permit a first-pass processing. An independent calibration stream is developed and tested, which selects tracks at the level-2 trigger (LVL2) after the reconstruction. The stream is composed of raw data, in byte-stream format, and contains only information of the relevant parts of the detector, in particular the hit information of the selected tracks. This leads to a significantly improved bandwidth usage and storage capability. The stream will be used to derive and update the calibration and alignment constants if necessary every 24h. Processing is done using specialized algorithms running in Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. The work is addressing in particular the alignment requirements, the needs for track and hit selection, timing and bandwidth issues.

  5. Multiscale Models for the Two-Stream Instability

    Science.gov (United States)

    Joseph, Ilon; Dimits, Andris; Banks, Jeffrey; Berger, Richard; Brunner, Stephan; Chapman, Thomas

    2017-10-01

    Interpenetrating streams of plasma found in many important scenarios in nature and in the laboratory can develop kinetic two-stream instabilities that exchange momentum and energy between the streams. A quasilinear model for the electrostatic two-stream instability is under development as a component of a multiscale model that couples fluid simulations to kinetic theory. Parameters of the model will be validated with comparison to full kinetic simulations using LOKI, and efficient strategies for numerical solution of the quasilinear model and for coupling to the fluid model will be discussed. Extending the kinetic models into the collisional regime requires an efficient treatment of the collision operator. Useful reductions of the collision operator relative to the full multi-species Landau-Fokker-Plank operator are being explored. These are further motivated both by careful consideration of the parameter orderings relevant to two-stream scenarios and by the particular 2D+2V phase space used in the LOKI code. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 17-ERD-081.

  6. An efficient reversible privacy-preserving data mining technology over data streams.

    Science.gov (United States)

    Lin, Chen-Yi; Kao, Yuan-Hung; Lee, Wei-Bin; Chen, Rong-Chang

    2016-01-01

    With the popularity of smart handheld devices and the emergence of cloud computing, users and companies can save various data, which may contain private data, to the cloud. Topics relating to data security have therefore received much attention. This study focuses on data stream environments and uses the concept of a sliding window to design a reversible privacy-preserving technology to process continuous data in real time, known as a continuous reversible privacy-preserving (CRP) algorithm. Data with CRP algorithm protection can be accurately recovered through a data recovery process. In addition, by using an embedded watermark, the integrity of the data can be verified. The results from the experiments show that, compared to existing algorithms, CRP is better at preserving knowledge and is more effective in terms of reducing information loss and privacy disclosure risk. In addition, it takes far less time for CRP to process continuous data than existing algorithms. As a result, CRP is confirmed as suitable for data stream environments and fulfills the requirements of being lightweight and energy-efficient for smart handheld devices.

  7. STREAMFINDER II: A possible fanning structure parallel to the GD-1 stream in Pan-STARRS1

    Science.gov (United States)

    Malhan, Khyati; Ibata, Rodrigo A.; Goldman, Bertrand; Martin, Nicolas F.; Magnier, Eugene; Chambers, Kenneth

    2018-05-01

    STREAMFINDER is a new algorithm that we have built to detect stellar streams in an automated and systematic way in astrophysical datasets that possess any combination of positional and kinematic information. In Paper I, we introduced the methodology and the workings of our algorithm and showed that it is capable of detecting ultra-faint and distant halo stream structures containing as few as ˜15 members (ΣG ˜ 33.6 mag arcsec-2) in the Gaia dataset. Here, we test the method with real proper motion data from the Pan-STARRS1 survey, and by selecting targets down to r0 = 18.5 mag we show that it is able to detect the GD-1 stellar stream, whereas the structure remains below a useful detection limit when using a Matched Filter technique. The radial velocity solutions provided by STREAMFINDER for GD-1 candidate members are found to be in good agreement with observations. Furthermore, our algorithm detects a ~40° long structure approximately parallel to GD-1, and which fans out from it, possibly a sign of stream-fanning due to the triaxiality of the Galactic potential. This analysis shows the promise of this method for detecting and analysing stellar streams in the upcoming Gaia DR2 catalogue.

  8. Intelligent Packet Shaper to Avoid Network Congestion for Improved Streaming Video Quality at Clients

    DEFF Research Database (Denmark)

    Kaul, Manohar; Khosla, Rajiv; Mitsukura, Y

    2003-01-01

    of this intelligent traffic-shaping algorithm on the underlying network real time packet traffic and the eradication of unwanted abruption in the streaming video quality. This paper concluded from the end results of the simulation that neural networks are a very superior means of modeling real-time traffic......This paper proposes a traffic shaping algorithm based on neural networks, which adapts to a network over which streaming video is being transmitted. The purpose of this intelligent shaper is to eradicate all traffic congestion and improve the end-user's video quality. It possesses the capability

  9. Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels

    Science.gov (United States)

    Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang

    In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.
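    A hedged sketch of the two-step idea alone follows: clients take round-robin turns, and a turn is swapped to another client whenever the channel of the scheduled client is predicted to be in an error burst. The paper's actual channel predictor and loss-rate bookkeeping are not reproduced; channel_good is a made-up per-client, per-slot channel state.

      import random

      def schedule(clients, slots, channel_good):
          order = list(clients)
          tx_log = []
          for slot in range(slots):
              idx = slot % len(order)                   # step 1: round-robin turn
              client = order[idx]
              if not channel_good(client, slot):        # step 2: predicted error burst -> swap
                  for j in range(1, len(order)):
                      candidate = order[(idx + j) % len(order)]
                      if channel_good(candidate, slot):
                          client = candidate
                          break
              tx_log.append((slot, client))
          return tx_log

      random.seed(1)
      state = {(c, s): random.random() > 0.3 for c in "ABC" for s in range(6)}
      print(schedule("ABC", 6, lambda c, s: state[(c, s)]))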

  10. Incorporation of water-use summaries into the StreamStats web application for Maryland

    Science.gov (United States)

    Ries, Kernell G.; Horn, Marilee A.; Nardi, Mark R.; Tessler, Steven

    2010-01-01

    Approximately 25,000 new households and thousands of new jobs will be established in an area that extends from southwest to northeast of Baltimore, Maryland, as a result of the Federal Base Realignment and Closure (BRAC) process, with consequent new demands on the water resources of the area. The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, has extended the area of implementation and added functionality to an existing map-based Web application named StreamStats to provide an improved tool for planning and managing the water resources in the BRAC-affected areas. StreamStats previously was implemented for only a small area surrounding Baltimore, Maryland, and it was extended to cover all BRAC-affected areas. StreamStats could provide previously published streamflow statistics, such as the 1-percent probability flood and the 7-day, 10-year low flow, for U.S. Geological Survey data-collection stations and estimates of streamflow statistics for any user-selected point on a stream within the implemented area. The application was modified for this study to also provide summaries of water withdrawals and discharges upstream from any user-selected point on a stream. This new functionality was made possible by creating a Web service that accepts a drainage-basin delineation from StreamStats, overlays it on a spatial layer of water withdrawal and discharge points, extracts the water-use data for the identified points, and sends it back to StreamStats, where it is summarized for the user. The underlying water-use data were extracted from the U.S. Geological Survey's Site-Specific Water-Use Database System (SWUDS) and placed into a Microsoft Access database that was created for this study for easy linkage to the Web service and StreamStats. This linkage of StreamStats with water-use information from SWUDS should enable Maryland regulators and planners to make more informed decisions on the use of water resources in the BRAC area, and

  11. Extended seizure detection algorithm for intracranial EEG recordings

    DEFF Research Database (Denmark)

    Kjaer, T. W.; Remvig, L. S.; Henriksen, J.

    2010-01-01

    Objective: We implemented and tested an existing seizure detection algorithm for scalp EEG (sEEG) with the purpose of improving it for intracranial EEG (iEEG) recordings. Method: iEEG was obtained from 16 patients with focal epilepsy undergoing work up for resective epilepsy surgery. Each patient...... had 4 or 5 recorded seizures and 24 hours of non-ictal data were used for evaluation. Data from three electrodes placed at the ictal focus were used for the analysis. A wavelet based feature extraction algorithm delivered input to a support vector machine (SVM) classifier for distinction between ictal...... and non-ictal iEEG. We compare our results to a method published by Shoeb in 2004. While the original method on sEEG was optimal with the use of only four subbands in the wavelet analysis, we found that better seizure detection could be made if all subbands were used for iEEG. Results: When using
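    A minimal sketch of such a wavelet-energy plus SVM pipeline is shown below on synthetic single-channel windows; PyWavelets and scikit-learn are assumed, all sub-band energies are used as features (mirroring the finding that iEEG benefits from all sub-bands), and the wavelet, window length, and classifier settings are illustrative rather than the authors' exact choices.

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def subband_energies(window, wavelet="db4", level=5):
          coeffs = pywt.wavedec(window, wavelet, level=level)
          return [float(np.sum(c ** 2)) for c in coeffs]        # one energy per sub-band

      rng = np.random.default_rng(0)
      fs, seconds = 256, 2
      background = [rng.normal(0, 1, fs * seconds) for _ in range(40)]
      ictal = [rng.normal(0, 1, fs * seconds) +
               3 * np.sin(2 * np.pi * 7 * np.arange(fs * seconds) / fs) for _ in range(40)]

      X = np.array([subband_energies(w) for w in background + ictal])
      y = np.array([0] * 40 + [1] * 40)
      clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
      print(clf.predict([subband_energies(ictal[0]), subband_energies(background[0])]))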

  12. An Adaptive Algorithm for Finding Frequent Sets in Landmark Windows

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Ong, Kok-Leong; Lee, Vincent

    2012-01-01

    We consider a CPU constrained environment for finding approximation of frequent sets in data streams using the landmark window. Our algorithm can detect overload situations, i.e., breaching the CPU capacity, and sheds data in the stream to “keep up”. This is done within a controlled error threshold...
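    A rough sketch of the load-shedding idea alone is given below (the actual algorithm's error control is more principled than this heuristic): transactions are sampled at a rate that is lowered whenever per-transaction processing exceeds an assumed CPU budget and raised again when there is slack, while itemset counts accumulate from the landmark.

      import random, time
      from collections import Counter
      from itertools import combinations

      counts = Counter()
      sample_rate = 1.0
      budget = 0.002            # assumed CPU budget (seconds per transaction); tune to the system

      def process(transaction):
          global sample_rate
          start = time.perf_counter()
          if random.random() <= sample_rate:            # shed data when overloaded
              for size in (1, 2):
                  for itemset in combinations(sorted(set(transaction)), size):
                      counts[itemset] += 1
          elapsed = time.perf_counter() - start
          # overload detected -> shed more; idle capacity -> shed less
          if elapsed > budget:
              sample_rate = max(0.1, sample_rate * 0.9)
          else:
              sample_rate = min(1.0, sample_rate * 1.05)

      random.seed(0)
      for _ in range(1000):
          process(random.sample("abcdefgh", k=3))
      print(counts.most_common(3), "final sample rate:", round(sample_rate, 2))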

  13. Phase recovering algorithms for extended objects encoded in digitally recorded holograms

    Directory of Open Access Journals (Sweden)

    Peng Z.

    2010-06-01

    Full Text Available The paper presents algorithms to recover the optical phase of digitally encoded holograms. Algorithms are based on the use of a numerical spherical reconstructing wave. Proof of the validity of the concept is performed through an experimental off axis digital holographic set-up. Two-color digital holographic reconstruction is also investigated. Application of the color set-up and algorithms concerns the simultaneous two-dimensional deformation measurement of an object submitted to a mechanical loading.

  14. Reclamation of potable water from mixed gas streams

    Science.gov (United States)

    Judkins, Roddie R; Bischoff, Brian L; Debusk, Melanie Moses; Narula, Chaitanya

    2013-08-20

    An apparatus for separating a liquid from a mixed gas stream can include a wall, a mixed gas stream passageway, and a liquid collection assembly. The wall can include a first surface, a second surface, and a plurality of capillary condensation pores. The capillary condensation pores extend through the wall, and have a first opening on the first surface of the wall, and a second opening on the second surface of the wall. The pore size of the pores can be between about 2 nm to about 100 nm. The mixed gas stream passageway can be in fluid communication with the first opening. The liquid collection assembly can collect liquid from the plurality of pores.

  15. Simulation and reconstruction of free-streaming data in CBM

    International Nuclear Information System (INIS)

    Friese, Volker

    2011-01-01

    The CBM experiment will investigate heavy-ion reactions at the FAIR facility at unprecedented interaction rates. This implies a novel read-out and data acquisition concept with self-triggered front-end electronics and free-streaming data. Event association must be performed in software on-line, and may require four-dimensional reconstruction routines. In order to study the problem of event association and to develop proper algorithms, simulations must be performed which go beyond the normal event-by-event processing as available from most experimental simulation frameworks. In this article, we discuss the challenges and concepts for the reconstruction of such free-streaming data and present first steps for a time-based simulation which is necessary for the development and validation of the reconstruction algorithms, and which requires modifications to the current software framework FAIRROOT as well as to the data model.

  16. Corotating pressure waves without streams in the solar wind

    International Nuclear Information System (INIS)

    Burlaga, L.F.

    1983-01-01

    Voyager 1 and 2 magnetic field and plasma data are presented which demonstrate the existence of large scale, corotating, non-linear pressure waves between 2 AU and 4 AU that are not accompanied by fast streams. The pressure waves are presumed to be generated by corotating streams near the Sun. For two of the three pressure waves that are discussed, the absence of a stream is probably a real, physical effect, viz., a consequence of deceleration of the stream by the associated compression wave. For the third pressure wave, the apparent absence of a stream may be a geometrical effect: it is likely that the stream was at latitudes just above those of the spacecraft, while the associated shocks and compression wave extended over a broader range of latitudes so that they could be observed by the spacecraft. It is suggested that the development of large-scale non-linear pressure waves at the expense of the kinetic energy of streams produces a qualitative change in the solar wind in the outer heliosphere. Within a few AU the quasi-stationary solar wind structure is determined by corotating streams whose structure is determined by the boundary conditions near the Sun.

  17. Extended Traffic Crash Modelling through Precision and Response Time Using Fuzzy Clustering Algorithms Compared with Multi-layer Perceptron

    Directory of Open Access Journals (Sweden)

    Iman Aghayan

    2012-11-01

    Full Text Available This paper compares two fuzzy clustering algorithms – fuzzy subtractive clustering and fuzzy C-means clustering – to a multi-layer perceptron neural network for their ability to predict the severity of crash injuries and to estimate the response time on the traffic crash data. Four clustering algorithms – hierarchical, K-means, subtractive clustering, and fuzzy C-means clustering – were used to obtain the optimum number of clusters based on the mean silhouette coefficient and R-value before applying the fuzzy clustering algorithms. The best-fit algorithms were selected according to two criteria: precision (root mean square, R-value, mean absolute errors, and sum of square error) and response time (t). The highest R-value was obtained for the multi-layer perceptron (0.89), demonstrating that the multi-layer perceptron had a high precision in traffic crash prediction among the prediction models, and that it was stable even in the presence of outliers and overlapping data. Meanwhile, in comparison with other prediction models, fuzzy subtractive clustering provided the lowest value for response time (0.284 second, 9.28 times faster than the time of the multi-layer perceptron), meaning that it could lead to developing an on-line system for processing data from detectors and/or a real-time traffic database. The model can be extended through improvements based on additional data through induction procedure.

  18. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most of the radar tracking algorithms in engineering only utilize position measurements. The extended Kalman filter with radial velocity measurement is presented; then a new filtering algorithm utilizing radial velocity measurement is proposed to improve tracking results, and the theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter, and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by simulation results.
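    To make the role of the radial-velocity (Doppler) measurement concrete, here is a hedged numerical sketch of one EKF predict/update step for a 2-D constant-velocity state [x, y, vx, vy] with range, azimuth, and radial-velocity measurements; the noise levels, motion model, and measurement values are invented and this is not the paper's exact filter.

      import numpy as np

      def h(state):
          x, y, vx, vy = state
          r = np.hypot(x, y)
          return np.array([r, np.arctan2(y, x), (x * vx + y * vy) / r])

      def jacobian(state):
          x, y, vx, vy = state
          r = np.hypot(x, y)
          return np.array([
              [x / r,                        y / r,                        0.0,   0.0],
              [-y / r**2,                    x / r**2,                     0.0,   0.0],
              [y * (vx * y - vy * x) / r**3, x * (vy * x - vx * y) / r**3, x / r, y / r],
          ])

      dt = 1.0
      F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
      Q = 0.01 * np.eye(4)                        # process noise (assumed)
      R = np.diag([25.0, 1e-4, 0.25])             # range, azimuth, radial-velocity noise (assumed)

      x_est = np.array([1000.0, 500.0, -20.0, 5.0])
      P = 100.0 * np.eye(4)

      x_pred, P_pred = F @ x_est, F @ P @ F.T + Q                                  # predict
      z = h(np.array([980.0, 505.0, -20.0, 5.0])) + np.array([3.0, 0.001, 0.1])    # noisy measurement
      H = jacobian(x_pred)
      S = H @ P_pred @ H.T + R
      K = P_pred @ H.T @ np.linalg.inv(S)
      x_est = x_pred + K @ (z - h(x_pred))                                         # update
      P = (np.eye(4) - K @ H) @ P_pred
      print(np.round(x_est, 1))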

  19. Extended hierarchical search (EHS) algorithm for detection of gravitational waves from inspiralling compact binaries

    CERN Document Server

    Sengupta, A S; Lazzarini, A; Prince, T

    2002-01-01

    Pattern matching techniques such as matched filtering will be used for online extraction of gravitational wave signals buried inside detector noise. This involves cross correlating the detector output with hundreds of thousands of templates spanning a multi-dimensional parameter space, which is very expensive computationally. A faster implementation algorithm was devised by Mohanty and Dhurandhar using a hierarchy of templates over the mass parameters, which speeded up the procedure by about 25-30 times. We show that a further reduction in computational cost is possible if we extend the hierarchy paradigm to an extra parameter, namely, the time of arrival of the signal. In the first stage, the chirp waveform is cut-off at a relatively low frequency allowing the data to be coarsely sampled leading to cost saving in performing the FFTs. This is possible because most of the signal power is at low frequencies, and therefore the advantage due to hierarchy over masses is not compromised. Results are obtained for sp...

  20. Estimation of Sideslip Angle Based on Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yupeng Huang

    2017-01-01

    Full Text Available The sideslip angle plays an extremely important role in vehicle stability control, but the sideslip angle in a production car cannot be obtained directly from a sensor because of the sensor's cost; it is essential to estimate the sideslip angle indirectly by means of other vehicle motion parameters; therefore, an estimation algorithm with real-time performance and accuracy is critical. The traditional estimation method based on the Kalman filter algorithm is correct in the vehicle's linear control region; however, on low-adhesion roads, vehicles have obvious nonlinear characteristics. In this paper, an extended Kalman filtering algorithm is put forward in consideration of the nonlinear characteristics of the tire and is verified by joint Carsim and Simulink simulation, such as a double lane change on wet cement and on ice and snow. To test and verify the effect of the extended Kalman filtering estimation algorithm, a real vehicle test was carried out on the limit test field. The experimental results show that the accuracy of the vehicle sideslip angle acquired by the extended Kalman filtering algorithm is obviously higher than that acquired by Kalman filtering in the nonlinear region.

  1. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data information is taken in by the camera and then sent out to the inspection system for future processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a XILINX™ FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low bandwidth output bus. The camera communicates to a host computer via an RS-232 link to the microcontroller. Static memory is used to both generate a FIFO interface for buffering defect burst data, and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  2. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...
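    As an illustration of the decision logic only, the sketch below assumes the per-macroblock motion vectors have already been parsed out of the H.264 stream (the parsing itself is not shown) and flags motion when vector magnitudes exceed a threshold persistently over several frames, a simple stand-in for the temporal confidence measure; the thresholds and synthetic vectors are invented.

      import numpy as np

      def detect_motion(mv_frames, mag_thresh=2.0, min_persistence=3):
          """mv_frames: list of (H, W, 2) arrays of per-macroblock motion vectors."""
          h, w, _ = mv_frames[0].shape
          persistence = np.zeros((h, w), dtype=int)
          alarms = []
          for t, mv in enumerate(mv_frames):
              moving = np.linalg.norm(mv, axis=2) > mag_thresh
              persistence = np.where(moving, persistence + 1, 0)     # reset where nothing moves
              alarms.append((t, bool((persistence >= min_persistence).any())))
          return alarms

      rng = np.random.default_rng(0)
      frames = [rng.normal(0, 0.3, (9, 11, 2)) for _ in range(6)]    # coding noise only
      for f in frames[2:]:
          f[4:6, 5:7] += np.array([4.0, 0.0])                        # an object moving right
      print(detect_motion(frames))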

  3. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in a video stream is one of the most interesting problems in computer vision. In fact, in most cases it would be nice to have a fire detection algorithm implemented in usual industrial cameras and/or to have the possibility to replace standard industrial cameras with ones implementing the fire detection algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time with statistical analysis of their trajectory. False alarms are minimized by combining multiple detection criteria: pixel brightness, trajectories of suspicious regions for evaluating characteristic fire flickering, and persistence of the alarm state in a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.

  4. 2MASS Extended Source Catalog: Overview and Algorithms

    Science.gov (United States)

    Jarrett, T.; Chester, T.; Cutri, R.; Schneider, S.; Skrutskie, M.; Huchra, J.

    1999-01-01

    The 2 Micron All-Sky Survey (2MASS) will observe over one million galaxies and extended Galactic sources covering the entire sky at wavelengths between 1 and 2 μm. Most of these galaxies, from 70 to 80%, will be newly catalogued objects.

  5. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.

  6. Mining Frequent Item Sets in Asynchronous Transactional Data Streams over Time Sensitive Sliding Windows Model

    International Nuclear Information System (INIS)

    Javaid, Q.; Memon, F.; Talpur, S.; Arif, M.; Awan, M.D.

    2016-01-01

    Extracting FPs (Frequent Patterns) from continuous transactional data streams is a challenging and critical task in applications such as web mining, data analysis and retail market, prediction and network monitoring, or analysis of stock market exchange data. Many algorithms have been developed previously for mining FPs from a data stream, and new solutions and approaches for the precise handling of data streams are still highly required. New techniques must address unbounded, ordered, and continuous sequences of data generated at a rapid speed from data streams. Hence, extracting FPs using fresh or recent data involves high-level analysis of data streams. We suggest an efficient technique for the sliding window model that extracts new and fresh FPs from high-speed data streams. In this study, a CPILT (Compacted Tree Compact Pattern Tree) is developed to capture the latest contents of the stream and to efficiently remove outdated contents from it. The main concept introduced in this work on CPILT is the dynamic restructuring of the tree, which helps produce a compacted, frequency-descending tree structure at runtime. With the help of the FP-growth mining technique, a complete list of new and fresh FPs is obtained from the CPILT over the current window. The memory usage and time complexity of mining the latest FPs in high-speed data streams are determined through experimentation and analysis. (author)
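    As a simplified, hedged illustration of mining fresh frequent patterns from only the most recent transactions (the CPILT tree and its dynamic restructuring are not reproduced; the window size and support threshold are invented), the sketch below keeps a time-sensitive sliding window and updates itemset counts as transactions enter and expire.

      from collections import Counter, deque
      from itertools import combinations

      class SlidingWindowFP:
          def __init__(self, window_size=4, min_support=2):
              self.window, self.counts = deque(), Counter()
              self.window_size, self.min_support = window_size, min_support

          def _itemsets(self, transaction):
              items = sorted(set(transaction))
              for size in range(1, len(items) + 1):
                  yield from combinations(items, size)

          def add(self, transaction):
              self.window.append(transaction)
              self.counts.update(self._itemsets(transaction))
              if len(self.window) > self.window_size:        # expire the oldest transaction
                  self.counts.subtract(self._itemsets(self.window.popleft()))

          def frequent(self):
              return {itemset: c for itemset, c in self.counts.items() if c >= self.min_support}

      fp = SlidingWindowFP()
      for t in [["a", "b"], ["a", "c"], ["a", "b", "c"], ["b", "c"], ["a", "b"]]:
          fp.add(t)
      print(fp.frequent())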

  7. Hill climbing algorithms and trivium

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2011-01-01

    This paper proposes a new method to solve certain classes of systems of multivariate equations over the binary field and its cryptanalytical applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations....... A characteristic of equation systems that may be efficiently solvable by the means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method...

  8. Testing a community water supply well located near a stream for susceptibility to stream contamination and low-flows.

    Science.gov (United States)

    Stewart-Maddox, N. S.; Tysor, E. H.; Swanson, J.; Degon, A.; Howard, J.; Tsinnajinnie, L.; Frisbee, M. D.; Wilson, J. L.; Newman, B. D.

    2014-12-01

    A community well is the primary water supply to the town of El Rito. This small rural town is located in a semi-arid, mountainous portion of northern New Mexico where water is scarce. The well is 72 meters from a nearby intermittent stream. Initial tritium sampling suggests a groundwater connection between the stream and well. The community is concerned with the sustainability and future quality of the well water. If this well is as tightly connected to the stream as the tritium data suggests, then the well is potentially at risk due to upstream contamination and the impacts of extended drought. To examine this, we observed the well over a two-week period performing pump and recovery tests, electrical resistivity surveys, and physical observations of the nearby stream. We also collected general chemistry, stable isotope and radon samples from the well and stream. Despite the large well diameter, our pump test data exhibited behavior similar to a Theis curve, but the rate of drawdown decreased below the Theis curve late in the test. This decrease suggests that the aquifer is being recharged, possibly through delayed yield, upwelling of groundwater, or from the stream. The delayed yield hypothesis is supported by our electrical resistivity surveys, which show very little change in the saturated zone over the course of the pump test, and by low values of pump-test estimated aquifer storativity. Observations of the nearby stream showed no change in stream-water level throughout the pump test. Together this data suggests that the interaction between the stream and the well is low, but recharge could be occurring through other mechanisms such as delayed yield. Additional pump tests of longer duration are required to determine the exact nature of the aquifer and its communication with the well.

  9. Cartesian product of hypergraphs: properties and algorithms

    Directory of Open Access Journals (Sweden)

    Alain Bretto

    2009-09-01

    Full Text Available Cartesian products of graphs have been studied extensively since the 1960s. They make it possible to decrease the algorithmic complexity of problems by using the factorization of the product. Hypergraphs were introduced as a generalization of graphs and the definition of Cartesian products extends naturally to them. In this paper, we give new properties and algorithms concerning coloring aspects of Cartesian products of hypergraphs. We also extend a classical prime factorization algorithm initially designed for graphs to connected conformal hypergraphs using 2-sections of hypergraphs.

  10. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.

  11. rEMM: Extensible Markov Model for Data Stream Clustering in R

    Directory of Open Access Journals (Sweden)

    Michael Hahsler

    2010-10-01

    Full Text Available Clustering streams of continuously arriving data has become an important application of data mining in recent years and efficient algorithms have been proposed by several researchers. However, clustering alone neglects the fact that data in a data stream is not only characterized by the proximity of data points which is used by clustering, but also by a temporal component. The extensible Markov model (EMM) adds the temporal component to data stream clustering by superimposing a dynamically adapting Markov chain. In this paper we introduce the implementation of the R extension package rEMM which implements EMM and we discuss some examples and applications.

  12. A novel image encryption algorithm based on chaos maps with Markov properties

    Science.gov (United States)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaos with Markov properties was researched and such an algorithm was proposed. This kind of chaos has higher complexity than the Logistic map and the Tent map while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it has better performance than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. Experiments show that the key stream can pass the SP800-22 test. The novel image encryption algorithm can resist CPA, CCA, and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation of adjacent pixels can also be eliminated. When compared with the algorithm based on the Logistic map, it has higher complexity and better uniformity, and its key stream is nearer to a true random number. It is also efficient to implement, which shows its value for common use.
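    The structure of such a cipher can be sketched as follows, with the caveat that the paper's chaos with Markov properties and its improved coupled map lattice are replaced here by a plain logistic map purely for brevity, so this shows only the usual two stages (chaos-driven pixel permutation, then XOR with a chaotic key stream) and is not a secure cipher.

      import numpy as np

      def logistic_sequence(x0, n, r=3.99):
          xs = np.empty(n)
          for i in range(n):
              x0 = r * x0 * (1.0 - x0)
              xs[i] = x0
          return xs

      def encrypt(image, key=0.3456789):
          flat = image.flatten()
          chaos = logistic_sequence(key, flat.size)
          perm = np.argsort(chaos)                        # permutation from the chaotic orbit
          keystream = (chaos * 256).astype(np.uint8)      # key stream from the same orbit
          return (flat[perm] ^ keystream).reshape(image.shape)

      def decrypt(cipher, key=0.3456789):
          flat = cipher.flatten()
          chaos = logistic_sequence(key, flat.size)
          perm = np.argsort(chaos)
          keystream = (chaos * 256).astype(np.uint8)
          plain = np.empty_like(flat)
          plain[perm] = flat ^ keystream                  # undo the permutation
          return plain.reshape(cipher.shape)

      img = (np.arange(16, dtype=np.uint8).reshape(4, 4) * 13) % 256
      enc = encrypt(img)
      assert np.array_equal(decrypt(enc), img)
      print(enc)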

  13. PGG: An Online Pattern Based Approach for Stream Variation Management

    Institute of Scientific and Technical Information of China (English)

    Lu-An Tang; Bin Cui; Hong-Yan Li; Gao-Shan Miao; Dong-Qing Yang; Xin-Biao Zhou

    2008-01-01

    Many database applications require efficient processing of data streams with value variations and fluctuating sampling frequency. The variations typically imply fundamental features of the stream and important domain knowledge about the underlying objects. In some data streams, successive events seem to recur in a certain time interval, but the data indeed evolves with tiny differences as time elapses. This feature, called pseudo periodicity, poses a new challenge to stream variation management. This study focuses on the online management of variations over such streams. The idea can be applied to many scenarios such as patient vital signal monitoring in medical applications. This paper proposes a new method named Pattern Growth Graph (PGG) to detect and manage variations over evolving streams with the following features: 1) it adopts the wave-pattern to capture the major information of data evolution and represent it compactly; 2) it detects the variations in a single pass over the stream with the help of a wave-pattern matching algorithm; 3) it only stores the differing segments of the pattern for the incoming stream, and hence substantially compresses the data without losing important information; 4) it distinguishes meaningful data changes from noise and reconstructs the stream with acceptable accuracy. Extensive experiments on real datasets containing millions of data items, as well as a prototype system, demonstrate the feasibility and effectiveness of the proposed scheme.

  14. Real-time change detection in data streams with FPGAs

    International Nuclear Information System (INIS)

    Vega, J.; Dormido-Canto, S.; Cruz, T.; Ruiz, M.; Barrera, E.; Castro, R.; Murari, A.; Ochando, M.

    2014-01-01

    Highlights: • Automatic recognition of changes in data streams of multidimensional signals. • Detection algorithm based on testing exchangeability on-line. • Real-time and off-line applicability. • Real-time implementation in FPGAs. - Abstract: The automatic recognition of changes in data streams is useful in both real-time and off-line data analyses. This article presents several effective change-detection algorithms (based on martingales) and describes their real-time applicability in data acquisition systems through the use of Field Programmable Gate Arrays (FPGAs). The automatic event recognition system is completely general and does not depend on either the particular event to detect or the specific data representation (waveforms, images or multidimensional signals). The developed approach provides good results for change detection in both the temporal evolution of profiles and the two-dimensional spatial distribution of volume emission intensity. The average computation time in the FPGA is 210 μs per profile.
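
    The martingale test for exchangeability mentioned in the highlights can be sketched in a few lines of Python. The strangeness measure, the power-martingale parameter and the alarm threshold below are illustrative assumptions, not the values used in the article:

    import random

    def strangeness_pvalues(stream):
        """P-value of each new sample from its strangeness (distance to the
        running mean) relative to all samples seen so far."""
        seen = []
        for x in stream:
            seen.append(x)
            mu = sum(seen) / len(seen)
            scores = [abs(v - mu) for v in seen]
            s = scores[-1]
            greater = sum(1 for v in scores if v > s)
            equal = sum(1 for v in scores if v == s)
            yield (greater + random.random() * equal) / len(scores)

    def power_martingale(pvalues, eps=0.92):
        """Randomised power martingale; it grows when the p-values stop
        being uniform, i.e. when the stream stops being exchangeable."""
        m = 1.0
        for p in pvalues:
            m *= eps * (max(p, 1e-12) ** (eps - 1.0))
            yield m

    data = [random.gauss(0, 1) for _ in range(300)] + \
           [random.gauss(4, 1) for _ in range(100)]          # change at sample 300
    for t, m in enumerate(power_martingale(strangeness_pvalues(data))):
        if m > 20.0:                                         # alarm threshold (assumed)
            print("change detected near sample", t)
            break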

  15. Towards Cache-Enabled, Order-Aware, Ontology-Based Stream Reasoning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.; McGuinness, Deborah L.

    2016-08-16

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
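
    A minimal sketch of the kind of eviction policy described above, assuming a simple in-memory cache of RDF-like triples; the capacity, the importance score, and the tie-breaking rule are hypothetical and are not the framework's actual implementation:

    import heapq, time

    class StreamCache:
        """Toy cache of (importance, arrival, expiry, triple) entries.
        Expired entries are evicted first; when the cache is still full,
        the least important / oldest entry goes next."""
        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.entries = []            # min-heap keyed by (importance, arrival)

        def insert(self, triple, expiry, importance=1.0):
            now = time.time()
            # 1. honour expiration timestamps assigned by the streaming source
            self.entries = [e for e in self.entries if e[2] > now]
            heapq.heapify(self.entries)
            # 2. then evict by "semantic importance", then by arrival time
            while len(self.entries) >= self.capacity:
                heapq.heappop(self.entries)
            heapq.heappush(self.entries, (importance, now, expiry, triple))

    cache = StreamCache(capacity=3)
    cache.insert(("sensor1", "hasReading", "42"), expiry=time.time() + 5, importance=0.9)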

  16. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2013-10-01

    Full Text Available Recent models of speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere, with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a lateralisation gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  17. Mapping a lateralization gradient within the ventral stream for auditory speech perception.

    Science.gov (United States)

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.

  18. GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration

    International Nuclear Information System (INIS)

    Sharp, G C; Kandasamy, N; Singh, H; Folkert, M

    2007-01-01

    This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup, up to 80 times for FDK and 70 times for demons, when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.

  19. A simple algorithm for computing the smallest enclosing circle

    DEFF Research Database (Denmark)

    Skyum, Sven

    1991-01-01

    Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.

  20. Track-before-detect procedures for detection of extended object

    Science.gov (United States)

    Fan, Ling; Zhang, Xiaoling; Shi, Jun

    2011-12-01

    In this article, we present a particle filter (PF)-based track-before-detect (PF TBD) procedure for detection of extended objects whose shape is modeled by an ellipse. By incorporating an existence variable and the target shape parameters into the state vector, the proposed algorithm performs joint estimation of the target presence/absence, trajectory and shape parameters under unknown nuisance parameters (target power and noise variance). Simulation results show that the proposed algorithm has good detection and tracking capabilities for extended objects.

  1. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    Science.gov (United States)

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection

  2. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links

    Directory of Open Access Journals (Sweden)

    Hongbo Zhao

    2018-05-01

    Full Text Available Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher
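
    A much-simplified, single-channel illustration of the replica-folding step in Python/NumPy is given below. It is not the DC-XFAST algorithm itself: code Doppler, the dual-channel verification and the full/partial peak logic are omitted, and the code length, block length and noise level are made-up example values:

    import numpy as np

    def folded_fft_correlate(incoming, local_code, block_len):
        """Fold the long local PN code into block_len-sample segments, sum
        them, and correlate with the incoming block via a circular FFT --
        the basic XFAST-style folding operation, simplified."""
        n_blocks = int(np.ceil(len(local_code) / block_len))
        padded = np.zeros(n_blocks * block_len)
        padded[:len(local_code)] = local_code
        folded = padded.reshape(n_blocks, block_len).sum(axis=0)   # replica folding
        spec = np.fft.fft(incoming) * np.conj(np.fft.fft(folded))
        return np.abs(np.fft.ifft(spec))

    rng = np.random.default_rng(0)
    code = rng.choice([-1.0, 1.0], size=8192)                      # long PN code
    offset = 1234                                                  # unknown code phase
    incoming = code[offset:offset + 1024] + 0.5 * rng.standard_normal(1024)
    corr = folded_fft_correlate(incoming, code, block_len=1024)
    peak = int(np.argmax(corr))
    print("estimated code phase mod block length:", (1024 - peak) % 1024)  # -> 210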

  3. Streams with Strahler Stream Order

    Data.gov (United States)

    Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...

  4. SensorWeb 3G: Extending On-Orbit Sensor Capabilities to Enable Near Realtime User Configurability

    Science.gov (United States)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Tran, Daniel; Davies, Ashley; Sullivan, Don; Ames, Troy

    2010-01-01

    This research effort prototypes an implementation of a standard interface, the Web Coverage Processing Service (WCPS), an Open Geospatial Consortium (OGC) standard, to enable users to define, test, upload and execute algorithms for on-orbit sensor systems. The user is able to customize on-orbit data products that result from raw data streaming from an instrument. This extends the SensorWeb 2.0 concept that was developed under a previous Advanced Information System Technology (AIST) effort, in which web services wrap sensors and a standardized Extensible Markup Language (XML) based scripting workflow language orchestrates processing steps across multiple domains. SensorWeb 3G extends the concept by providing user controls into the flight software modules associated with an on-orbit sensor, and thus provides a degree of flexibility which does not presently exist. The successful demonstrations to date will be presented, which include a realistic HyspIRI decadal mission testbed. Furthermore, benchmarks that were run will also be presented along with future demonstration and benchmark tests planned. Finally, we conclude with implications for the future and how this concept dovetails into efforts to develop "cloud computing" methods and standards.

  5. Nanoliter-droplet acoustic streaming via ultra high frequency surface acoustic waves.

    Science.gov (United States)

    Shilton, Richie J; Travagliati, Marco; Beltram, Fabio; Cecchini, Marco

    2014-08-06

    The relevant length scales in sub-nanometer amplitude surface acoustic wave-driven acoustic streaming are demonstrated. We demonstrate the absence of any physical limitations preventing the downscaling of SAW-driven internal streaming to nanoliter microreactors and beyond by extending SAW microfluidics up to operating frequencies in the GHz range. This method is applied to nanoliter-scale fluid mixing.

  6. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    Science.gov (United States)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and for solving other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
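
    The greedy bandit-style selection can be sketched as follows; the epsilon-greedy rule and the reward definition (reward = 1 when the chosen cascade fires) are assumptions used for illustration, not necessarily the exact strategy of the paper:

    import random

    def bandit_cascade_selector(cascades, frames, eps=0.1):
        """Treat each Viola-Jones cascade as an arm of an N-armed bandit and
        pick which one to run on every incoming frame (epsilon-greedy)."""
        counts = [0] * len(cascades)
        values = [0.0] * len(cascades)       # running mean reward per cascade
        for frame in frames:
            if random.random() < eps:
                arm = random.randrange(len(cascades))                      # explore
            else:
                arm = max(range(len(cascades)), key=lambda i: values[i])   # exploit
            reward = 1.0 if cascades[arm](frame) else 0.0                  # detection fired?
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]
            yield arm, reward

    # Dummy "cascades" standing in for real detectors of different logo classes.
    cascades = [lambda f: f % 3 == 0, lambda f: f % 7 == 0]
    hits = sum(r for _, r in bandit_cascade_selector(cascades, range(1000)))
    print("detections:", int(hits))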

  7. Tracking Gendered Streams

    Directory of Open Access Journals (Sweden)

    Maria Eriksson

    2017-10-01

    Full Text Available One of the most prominent features of digital music services is the provision of personalized music recommendations that come about through the profiling of users and audiences. Based on a range of "bot experiments," this article investigates if, and how, gendered patterns in music recommendations are provided by the streaming service Spotify. While our experiments did not give any strong indications that Spotify assigns different taste profiles to male and female users, the study showed that male artists were highly overrepresented in Spotify's music recommendations; an issue which we argue prompts users to cite hegemonic masculine norms within the music industries. Although the results should be approached as historically and contextually contingent, we argue that they point to how gender and gendered tastes may be constituted through the interplay between users and algorithmic knowledge-making processes, and how digital content delivery may maintain and challenge gender relations and gendered power differentials within the music industries. Seen through the lens of critical research on software, music and gender performativity, the experiments thus provide insights into how gender is shaped and attributed meaning as it materializes in contemporary music streams.

  8. Track-before-detect procedures for detection of extended object

    Directory of Open Access Journals (Sweden)

    Fan Ling

    2011-01-01

    Full Text Available In this article, we present a particle filter (PF)-based track-before-detect (PF TBD) procedure for detection of extended objects whose shape is modeled by an ellipse. By incorporating an existence variable and the target shape parameters into the state vector, the proposed algorithm performs joint estimation of the target presence/absence, trajectory and shape parameters under unknown nuisance parameters (target power and noise variance). Simulation results show that the proposed algorithm has good detection and tracking capabilities for extended objects.

  9. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is one of the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, first, into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
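
    A compact sketch of the hierarchical decision pipeline described above, using scikit-learn's bagged SVM and a small neural network; the feature set, the random labels, and the energy rule for silence are placeholders rather than the paper's actual features and training data:

    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    # Placeholder per-segment features (e.g. energy, zero-crossing rate,
    # spectral centroid) and placeholder labels -- real training data needed.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((400, 3))
    speech_vs_non = rng.integers(0, 2, 400)      # stage 1: speech / non-speech
    music_vs_env = rng.integers(0, 2, 400)       # stage 2: music / environment

    stage1 = BaggingClassifier(SVC(), n_estimators=10).fit(X, speech_vs_non)
    stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, music_vs_env)

    def classify_segment(features, energy, energy_floor=0.01):
        """Hierarchical decision mirroring the pipeline above (simplified)."""
        if stage1.predict([features])[0] == 1:                       # speech branch
            return "silence" if energy < energy_floor else "pure-speech"
        return "music" if stage2.predict([features])[0] == 1 else "environment"

    print(classify_segment(X[0], energy=0.2))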

  10. A Comparative Study of Frequent and Maximal Periodic Pattern Mining Algorithms in Spatiotemporal Databases

    Science.gov (United States)

    Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.

    2017-08-01

    Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed by considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA is an algorithm that grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because of fewer levels of database projection compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segment, sequence and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns by using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically in terms of character, series and section approaches, respectively. Determining the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets is a genuinely open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in efficiency and scalability.

  11. SHARPEN-Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network

    KAUST Repository

    Loksha, Ilya V.

    2009-04-30

    Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. © 2009 Wiley Periodicals, Inc.

  12. Influence of the Gulf Stream on the troposphere.

    Science.gov (United States)

    Minobe, Shoshiro; Kuwano-Yoshida, Akira; Komori, Nobumasa; Xie, Shang-Ping; Small, Richard Justin

    2008-03-13

    The Gulf Stream transports large amounts of heat from the tropics to middle and high latitudes, and thereby affects weather phenomena such as cyclogenesis and low cloud formation. But its climatic influence, on monthly and longer timescales, remains poorly understood. In particular, it is unclear how the warm current affects the free atmosphere above the marine atmospheric boundary layer. Here we consider the Gulf Stream's influence on the troposphere, using a combination of operational weather analyses, satellite observations and an atmospheric general circulation model. Our results reveal that the Gulf Stream affects the entire troposphere. In the marine boundary layer, atmospheric pressure adjustments to sharp sea surface temperature gradients lead to surface wind convergence, which anchors a narrow band of precipitation along the Gulf Stream. In this rain band, upward motion and cloud formation extend into the upper troposphere, as corroborated by the frequent occurrence of very low cloud-top temperatures. These mechanisms provide a pathway by which the Gulf Stream can affect the atmosphere locally, and possibly also in remote regions by forcing planetary waves. The identification of this pathway may have implications for our understanding of the processes involved in climate change, because the Gulf Stream is the upper limb of the Atlantic meridional overturning circulation, which has varied in strength in the past and is predicted to weaken in response to human-induced global warming in the future.

  13. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined by the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
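
    The standard product construction is one common way to realise the intersection operator on DFAs (the paper's NFA "overriding" method differs); a small Python sketch, with the example automata chosen arbitrarily:

    def intersect_dfa(dfa1, dfa2, alphabet):
        """Product construction: run both DFAs in lock-step and accept when
        both accept. A DFA is (start, {(state, symbol): state}, accepting set)."""
        (s1, t1, f1), (s2, t2, f2) = dfa1, dfa2
        start = (s1, s2)
        trans, accept, todo, seen = {}, set(), [start], {start}
        while todo:
            a, b = todo.pop()
            if a in f1 and b in f2:
                accept.add((a, b))
            for sym in alphabet:
                nxt = (t1[(a, sym)], t2[(b, sym)])
                trans[((a, b), sym)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return start, trans, accept

    def accepts(dfa, word):
        state, trans, accepting = dfa
        for ch in word:
            state = trans[(state, ch)]
        return state in accepting

    # DFA1 accepts words with an even number of 'a's; DFA2 accepts words ending in 'b'.
    d1 = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, {0})
    d2 = (0, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, {1})
    print(accepts(intersect_dfa(d1, d2, 'ab'), "aab"))   # True: two 'a's and ends in 'b'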

  14. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  15. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    Science.gov (United States)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
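
    The gamma weighting itself is easy to illustrate numerically. In the sketch below, Beer's law stands in for the full two-stream layer solution, and the mean optical depth, shape parameter and cosine of the solar zenith angle are arbitrary example values, so this is only a conceptual illustration of the weighting step:

    import numpy as np
    from math import gamma as gamma_fn

    def gamma_pdf(tau, mean_tau, shape):
        """Gamma distribution of cloud optical depth with the given mean."""
        scale = mean_tau / shape
        return tau ** (shape - 1) * np.exp(-tau / scale) / (gamma_fn(shape) * scale ** shape)

    def gamma_weighted_transmittance(mean_tau, shape, mu0=0.5, n=20000, tau_max=300.0):
        """Domain-averaged transmittance <T> = integral p(tau) T(tau) dtau,
        with T(tau) = exp(-tau/mu0) standing in for the two-stream solution."""
        tau = np.linspace(1e-6, tau_max, n)
        integrand = gamma_pdf(tau, mean_tau, shape) * np.exp(-tau / mu0)
        return float(np.sum(integrand) * (tau[1] - tau[0]))

    # A horizontally inhomogeneous layer (shape = 1, an exponential spread of
    # optical depth) transmits far more than a homogeneous layer of the same mean.
    print(gamma_weighted_transmittance(10.0, shape=1.0), np.exp(-10.0 / 0.5))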

  16. An Implementation of RC4+ Algorithm and Zig-zag Algorithm in a Super Encryption Scheme for Text Security

    Science.gov (United States)

    Budiman, M. A.; Amalia; Chayanie, N. I.

    2018-03-01

    Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography than classical cryptography because it is harder to break. One classical algorithm is the Zig-zag algorithm, which uses the transposition technique: the original message is unreadable unless a person has the key to decrypt the message. To improve security, the Zig-zag Cipher is combined with the RC4+ Cipher, which is a symmetric-key algorithm in the form of a stream cipher. The two algorithms are combined to make a super-encryption. By combining these two algorithms, the message will be harder for a cryptanalyst to break. The results show that the complexity of the combined algorithm is θ(n²), while the complexities of the Zig-zag Cipher and the RC4+ Cipher are θ(n²) and θ(n), respectively.
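
    The two stages can be sketched in Python. Standard RC4 is shown below purely to illustrate the stream-cipher stage (the paper uses the RC4+ variant), and the rail count and key are arbitrary examples:

    def zigzag_encrypt(text, rails=3):
        """Rail-fence (zig-zag) transposition: write characters diagonally
        across `rails` rows, then read the rows left to right."""
        rows = [[] for _ in range(rails)]
        row, step = 0, 1
        for ch in text:
            rows[row].append(ch)
            if row == 0:
                step = 1
            elif row == rails - 1:
                step = -1
            row += step
        return "".join("".join(r) for r in rows)

    def rc4_keystream(key, n):
        """Plain RC4 key stream, standing in for the RC4+ stage."""
        S = list(range(256))
        j = 0
        for i in range(256):                     # key-scheduling algorithm
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(n):                       # pseudo-random generation
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    transposed = zigzag_encrypt("SUPER ENCRYPTION EXAMPLE", rails=3)
    ks = rc4_keystream(b"secret key", len(transposed))
    ciphertext = bytes(c ^ k for c, k in zip(transposed.encode(), ks))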

  17. Cross-Layer Design of Source Rate Control and Congestion Control for Wireless Video Streaming

    Directory of Open Access Journals (Sweden)

    Peng Zhu

    2007-01-01

    Full Text Available Cross-layer design has been used in streaming video over wireless channels to optimize the overall system performance. In this paper, we extend our previous work on the joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC) proposed in our previous work to the wireless scenario, and provide a detailed discussion about how to enhance the overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario, and a thorough performance evaluation is conducted to investigate its performance. Simulation results show that by cross-layer design of source rate control at the application layer and congestion control at the transport layer, and by taking advantage of MAC layer information, our approach can avoid the throughput degradation caused by wireless link errors and better support the QoS requirements of the application. Thus, the playback quality is significantly improved, while good performance of the transport protocol is still preserved.

  18. Integrated WiFi/PDR/Smartphone Using an Adaptive System Noise Extended Kalman Filter Algorithm for Indoor Localization

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-02-01

    Full Text Available Wireless signal strength is susceptible to interference, jumping, and instability, which often appear in positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform the fusing calculation by adaptively determining the dynamic noise of the filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically requires quite a high computational burden: to reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of the respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m through the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).
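
    A simplified position filter in the spirit of the scheme above is sketched below: the PDR step (step length plus heading) drives the prediction, Wi-Fi fixes drive the update, and the process noise is inflated while the pedestrian is turning. The noise values, turn threshold and measurement model are assumptions, and the full EKF state of the paper is reduced to a 2-D position for brevity:

    import numpy as np

    def fuse_pdr_wifi(steps, wifi_fixes, q_straight=0.05, q_turn=0.5, r_wifi=9.0):
        """Simplified PDR + Wi-Fi position filter with adaptive system noise:
        the process noise is inflated while the pedestrian is turning."""
        x = np.zeros(2)                    # position estimate [east, north]
        P = np.eye(2) * 4.0                # estimate covariance
        R = np.eye(2) * r_wifi             # Wi-Fi fix noise covariance
        prev_heading, track = None, []
        for (length, heading), z in zip(steps, wifi_fixes):
            turning = prev_heading is not None and abs(heading - prev_heading) > 0.3
            Q = np.eye(2) * (q_turn if turning else q_straight)   # adaptive noise
            prev_heading = heading
            # predict with the PDR step
            x = x + length * np.array([np.cos(heading), np.sin(heading)])
            P = P + Q
            # update with a Wi-Fi position fix, when one is available
            if z is not None:
                K = P @ np.linalg.inv(P + R)
                x = x + K @ (np.asarray(z) - x)
                P = (np.eye(2) - K) @ P
            track.append(x.copy())
        return track

    steps = [(0.7, 0.0)] * 10 + [(0.7, np.pi / 2)] * 10        # walk east, then north
    fixes = [(min(k + 1, 10) * 0.7, max(0, k - 9) * 0.7) if k % 3 == 0 else None
             for k in range(20)]                               # sparse example fixes
    print(fuse_pdr_wifi(steps, fixes)[-1])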

  19. StreamStats in Oklahoma - Drainage-Basin Characteristics and Peak-Flow Frequency Statistics for Ungaged Streams

    Science.gov (United States)

    Smith, S. Jerrod; Esralew, Rachel A.

    2010-01-01

    drainage-basin outlet for the period 1961-1990, 10-85 channel slope (slope between points located at 10 percent and 85 percent of the longest flow-path length upstream from the outlet), and percent impervious area. The Oklahoma StreamStats application interacts with the National Streamflow Statistics database, which contains the peak-flow regression equations in a previously published report. Fourteen peak-flow (flood) frequency statistics are available for computation in the Oklahoma StreamStats application. These statistics include the peak flow at 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for rural, unregulated streams; and the peak flow at 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for rural streams that are regulated by Natural Resources Conservation Service floodwater retarding structures. Basin characteristics and streamflow statistics cannot be computed for locations in playa basins (mostly in the Oklahoma Panhandle) and along main stems of the largest river systems in the state, namely the Arkansas, Canadian, Cimarron, Neosho, Red, and Verdigris Rivers, because parts of the drainage areas extend outside of the processed hydrologic units.

  20. Optimizing Reservoir-Stream-Aquifer Interactions for Conjunctive Use and Hydropower Production

    Directory of Open Access Journals (Sweden)

    Hala Fayad

    2012-01-01

    Full Text Available Conjunctive management of water resources involves coordinating the use of surface water and groundwater resources. Very few simulation/optimization (S-O) models for stream-aquifer system management have included detailed interactions between groundwater, streams, and reservoir storage. This paper presents an S-O model that does so, via artificial neural network simulators and a genetic algorithm optimizer, for multiobjective conjunctive water use problems. The model simultaneously addresses all significant flows, including reservoir-stream-diversion-aquifer interactions, in more detail than previous models. The model simultaneously maximizes total water provided and hydropower production. A penalty function implicitly poses constraints on state variables. The model effectively finds feasible optimal solutions and the Pareto optimum. An application to planning water resource and mini-hydropower system development is illustrated.

  1. The HSBQ Algorithm with Triple-play Services for Broadband Hybrid Satellite Constellation Communication System

    Directory of Open Access Journals (Sweden)

    Anupon Boriboon

    2016-07-01

    Full Text Available The HSBQ algorithm is an active queue management algorithm designed to avoid high packet loss rates and keep the stream queue stable. The core problem is the calculation of the drop probability so as to achieve both queue-length stability and bandwidth fairness. This paper proposes HSBQ, which drops packets before the queues overflow at the gateways, so that the end nodes can respond to the congestion before queue overflow. The algorithm uses the change of the average queue length to adjust the amount by which the mark (or drop) probability is changed. Moreover, it adjusts the queue weight, which is used to estimate the average queue length, based on the rate. The results show that the HSBQ algorithm maintains a stable stream queue better than congestion-metric algorithms without flow information as the rate of the hybrid satellite network changes dramatically, and the presented empirical evidence demonstrates that HSBQ offers better quality of service than the queue control mechanisms traditionally used in hybrid satellite networks.
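
    For comparison with the mechanism described above, a RED-style sketch of computing a drop probability from an averaged queue length is given below; HSBQ additionally adapts the queue weight and the probability step, which is not shown, and all thresholds here are illustrative:

    import random

    class REDStyleQueue:
        """RED-style early dropping: an exponentially weighted moving average
        of the queue length drives the drop probability."""
        def __init__(self, min_th=20, max_th=60, max_p=0.1, weight=0.02):
            self.queue = []
            self.avg = 0.0
            self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight

        def enqueue(self, pkt):
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg < self.min_th:
                drop_p = 0.0
            elif self.avg > self.max_th:
                drop_p = 1.0
            else:
                drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < drop_p:
                return False                  # drop early, before the queue overflows
            self.queue.append(pkt)
            return True

        def dequeue(self):
            return self.queue.pop(0) if self.queue else None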

  2. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    Science.gov (United States)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions requires cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation by collecting about 20 videos (i.e., 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that the two-stream ConvNet provides a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, results in an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrices and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair

  3. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

    Full Text Available In this paper we inspect the difference between an original sound recording and the signal captured after streaming the original recording over a network loaded with heavy traffic. There are several kinds of failures in the captured recording caused by network congestion. We try to find a method for evaluating the correctness of streamed audio. Usually there are metrics based on human perception of the signal, such as "signal is clear, without audible failures", "signal has some failures but is understandable", or "signal is inarticulate". These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the algorithm called Dynamic Time Warping (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data was acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures of the received audio data stream. This experiment is focused on the comparison of sound recordings rather than the network mechanism. We focus, in this paper, on a real-time audio stream such as a telephone call, where it is not possible to stream audio in advance to a "pool". Instead it is necessary to achieve as small a delay as possible (between speaker voice recording and listener voice replay). We use the RTP protocol for streaming audio.
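
    Dynamic Time Warping itself can be written in a few lines. The sketch below is a textbook DTW over raw sample values; the paper applies DTW to audio recordings, and the choice of local cost and the absence of windowing constraints are assumptions here:

    def dtw_distance(a, b):
        """Dynamic Time Warping distance between two sample sequences,
        used here to compare an original and a streamed recording."""
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j],      # insertion
                                     D[i][j - 1],      # deletion
                                     D[i - 1][j - 1])  # match
        return D[n][m]

    original = [0.0, 0.2, 0.9, 0.4, 0.1]
    streamed = [0.0, 0.0, 0.2, 0.8, 0.4, 0.1]   # delayed / slightly distorted copy
    print(dtw_distance(original, streamed))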

  4. System architecture for ubiquitous live video streaming in university network environment

    CSIR Research Space (South Africa)

    Dludla, AG

    2013-09-01

    Full Text Available an architecture which supports ubiquitous live streaming for university or campus networks using a modified bluetooth inquiry mechanism with extended ID, integrated end-user device usage and adaptation to heterogeneous networks. Riding on that architecture...

  5. Learning Extended Finite State Machines

    Science.gov (United States)

    Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard

    2014-01-01

    We present an active learning algorithm for inferring extended finite state machines (EFSM)s, combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses the tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.

  6. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D

  7. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  8. Selective particle and cell capture in a continuous flow using micro-vortex acoustic streaming.

    Science.gov (United States)

    Collins, David J; Khoo, Bee Luan; Ma, Zhichao; Winkler, Andreas; Weser, Robert; Schmidt, Hagen; Han, Jongyoon; Ai, Ye

    2017-05-16

    Acoustic streaming has emerged as a promising technique for refined microscale manipulation, where strong rotational flow can give rise to particle and cell capture. In contrast to hydrodynamically generated vortices, acoustic streaming is rapidly tunable, highly scalable and requires no external pressure source. Though streaming is typically ignored or minimized in most acoustofluidic systems that utilize other acoustofluidic effects, we maximize the effect of acoustic streaming in a continuous flow using a high-frequency (381 MHz), narrow-beam focused surface acoustic wave. This results in rapid fluid streaming, with velocities orders of magnitude greater than that of the lateral flow, to generate fluid vortices that extend the entire width of a 400 μm wide microfluidic channel. We characterize the forces relevant for vortex formation in a combined streaming/lateral flow system, and use these acoustic streaming vortices to selectively capture 2 μm particles from a mixed suspension with 1 μm particles, and human breast adenocarcinoma cells (MDA-231) from red blood cells.

  9. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  10. Land cover change detection using the internal covariance matrix of the extended kalman filter over multiple spectral bands

    CSIR Research Space (South Africa)

    Salmon

    2013-06-01

    Full Text Available IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(3): 1079-1085. Land cover change detection using the internal covariance matrix of the extended Kalman filter over multiple spectral bands. Salmon BP, Kleynhans W, Van den Bergh...

  11. Methylmercury bioaccumulation in stream food webs declines with increasing primary production

    Science.gov (United States)

    Walters, David; D.F. Raikow,; C.R. Hammerschmidt,; M.G. Mehling,; A. Kovach,; J.T. Oris,

    2015-01-01

    Opposing hypotheses posit that increasing primary productivity should result in either greater or lesser contaminant accumulation in stream food webs. We conducted an experiment to evaluate primary productivity effects on MeHg accumulation in stream consumers. We varied light for 16 artificial streams creating a productivity gradient (oxygen production =0.048–0.71 mg O2 L–1 d–1) among streams. Two-level food webs were established consisting of phytoplankton/filter feeding clam, periphyton/grazing snail, and leaves/shredding amphipod (Hyalella azteca). Phytoplankton and periphyton biomass, along with MeHg removal from the water column, increased significantly with productivity, but MeHg concentrations in these primary producers declined. Methylmercury concentrations in clams and snails also declined with productivity, and consumer concentrations were strongly correlated with MeHg concentrations in primary producers. Heterotroph biomass on leaves, MeHg in leaves, and MeHg in Hyalella were unrelated to stream productivity. Our results support the hypothesis that contaminant bioaccumulation declines with stream primary production via the mechanism of bloom dilution (MeHg burden per cell decreases in algal blooms), extending patterns of contaminant accumulation documented in lakes to lotic systems.

  12. Innovations in lattice QCD algorithms

    International Nuclear Information System (INIS)

    Orginos, Konstantinos

    2006-01-01

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today

  13. An extended k-means technique for clustering moving objects

    Directory of Open Access Journals (Sweden)

    Omnia Ossama

    2011-03-01

    Full Text Available The k-means algorithm is one of the basic clustering techniques used in many data mining applications. In this paper we present a novel pattern-based clustering algorithm that extends the k-means algorithm for clustering moving-object trajectory data. The proposed algorithm uses a key feature of moving-object trajectories, namely their direction, as a heuristic to determine the number of clusters for the k-means algorithm. In addition, we use the silhouette coefficient as a measure of the quality of our proposed approach. Finally, we present experimental results on both real and synthetic data that show the performance and accuracy of our proposed technique.
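
    A rough sketch of the direction heuristic and the subsequent k-means step, assuming trajectories given as arrays of 2-D points; the sector count, the trajectory features, and the bucketing rule are illustrative choices, not the paper's exact procedure:

    import numpy as np

    def direction_based_k(trajectories, n_sectors=8):
        """Bucket each trajectory by the compass direction of its net
        displacement; the number of non-empty buckets becomes k."""
        angles = [np.arctan2(*(t[-1] - t[0])[::-1]) for t in trajectories]
        sectors = np.floor((np.array(angles) + np.pi) / (2 * np.pi / n_sectors)).astype(int)
        sectors = np.clip(sectors, 0, n_sectors - 1)
        return max(1, len(np.unique(sectors)))

    def kmeans(X, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = X[labels == j].mean(axis=0)
        return labels, centres

    # Represent each trajectory by its start and end coordinates for clustering.
    trajs = [np.cumsum(np.random.default_rng(i).normal(size=(20, 2)), axis=0) for i in range(30)]
    k = direction_based_k(trajs)
    features = np.array([np.hstack([t[0], t[-1]]) for t in trajs])
    labels, _ = kmeans(features, k)
    print("k =", k, "cluster sizes:", np.bincount(labels, minlength=k))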

  14. Using Video to Communicate Scientific Findings -- Habitat Connections in Urban Streams

    Science.gov (United States)

    Harned, D. A.; Moorman, M.; Fitzpatrick, F. A.; McMahon, G.

    2011-12-01

    The U.S Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) provides information about (1) water-quality conditions and how those conditions vary locally, regionally, and nationally, (2) water-quality trends, and (3) factors that affect those conditions. As part of the NAWQA Program, the Effects of Urbanization on Stream Ecosystems (EUSE) study examined the vulnerability and resilience of streams to urbanization. Completion of the EUSE study has resulted in over 20 scientific publications. Video podcasts are being used in addition to these publications to communicate the relevance of these scientific findings to more general audiences such as resource managers, educational groups, public officials, and the general public. An example of one of the podcasts is a film examining effects of urbanization on stream habitat. "Habitat Connections in Urban Streams" explores how urbanization changes some of the physical features that provide in-stream habitat and examines examples of stream restoration projects designed to improve stream form and function. The "connections" theme is emphasized, including the connection of in-stream habitats from the headwaters to the stream mouth; connections between stream habitat and the surrounding floodplains, wetlands and basin; and connections between streams and people-- resource managers, public officials, scientists, and the general public. Examples of innovative stream restoration projects in Baltimore Maryland; Milwaukee, Wisconsin; and Portland Oregon are shown with interviews of managers, engineers, scientists, and others describing the projects. The film is combined with a website with links to extended film versions of the stream-restoration project interviews. The website and films are an example of USGS efforts aimed at improving science communication to a general audience. The film is available for access from the EUSE website: http://water.usgs.gov/nawqa/urban/html/podcasts.html. Additional films are

  15. Battling memory requirements of array programming through streaming

    DEFF Research Database (Denmark)

    Kristensen, Mads Ruben Burgdorff; Avery, James Emil; Blum, Troels

    2016-01-01

    A barrier to efficient array programming, for example in Python/NumPy, is that algorithms written as pure array operations completely without loops, while most efficient on small input, can lead to explosions in memory use. The present paper presents a solution to this problem using array streaming, implemented in the automatic parallelization high-performance framework Bohrium. This makes it possible to use array programming in Python/NumPy code directly, even when the apparent memory requirement exceeds the machine capacity, since the automatic streaming eliminates the temporary memory overhead by performing calculations in per-thread registers. Using Bohrium, we automatically fuse, JIT-compile, and execute NumPy array operations on GPGPUs without modification to the user programs. We present performance evaluations of three benchmarks, all of which show dramatic reductions in memory use from...

  16. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David [II. Physikalisches Institut, University of Giessen (Germany); Ye, Hua [Institute of High Energy Physics, CAS (China); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data are streaming into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as the first level and on a farm of GPUs or PCs as the second level. One important part of the system is the online track reconstruction. In this presentation, an online tracking algorithm for helix track reconstruction in the solenoidal field is shown. The VHDL-based algorithm is tested with different types of events at different event rates. Furthermore, a study of T0 extraction from the tracking algorithm is performed. A concept of simultaneous tracking and T0 determination is presented.

  17. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  18. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Full Text Available Due to its high efficiency and good scalability, the hierarchical hybrid P2P architecture has recently drawn increasing attention in P2P streaming research and applications. Super peer selection, the key problem in hybrid heterogeneous P2P architectures, is highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework, and its evolutionarily stable strategies (ESSs) are analyzed. A distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is then proposed so that peers converge to the ESSs using their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers, and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers grows.
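
    For readers unfamiliar with the learning step used by such schemes, the following minimal Python sketch shows a stateless (bandit-style) Q-learning update in which a peer repeatedly chooses between acting as a super peer or as an ordinary peer. The payoff function, capacities, and learning parameters are hypothetical and are not taken from the ESS-SPS algorithm itself.

      import random

      ACTIONS = ["super_peer", "ordinary_peer"]

      def toy_payoff(action, upload_capacity):
          # Hypothetical payoff: being a super peer pays off only with enough capacity.
          if action == "super_peer":
              return upload_capacity - 0.5      # serving cost of 0.5
          return 0.2                            # small baseline utility for ordinary peers

      def learn_strategy(upload_capacity, episodes=5000, alpha=0.1, epsilon=0.1):
          q = {a: 0.0 for a in ACTIONS}
          for _ in range(episodes):
              a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
              r = toy_payoff(a, upload_capacity)
              q[a] += alpha * (r - q[a])        # Q-learning update from own payoff history
          return max(q, key=q.get)

      print(learn_strategy(upload_capacity=1.2))   # high-capacity peer -> "super_peer"
      print(learn_strategy(upload_capacity=0.4))   # low-capacity peer  -> "ordinary_peer"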

  19. THE ORBIT OF THE ORPHAN STREAM

    International Nuclear Information System (INIS)

    Newberg, Heidi Jo; Willett, Benjamin A.; Yanny, Brian; Xu Yan

    2010-01-01

    halo masses. Distinguishing between different classes of models requires data over a larger range of distances. The Orphan Stream is projected to extend to 90 kpc from the Galactic center, and measurements of these distant parts of the stream would be a powerful probe of the mass of the Milky Way.

  20. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  1. Lifelong Augmentation of Multimodal Streaming Autobiographical Memories

    OpenAIRE

    Petit, Maxime; Fischer, Tobias; Demiris, Yiannis

    2016-01-01

    Robot systems that interact with humans over extended periods of time will benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions with humans take place. As an example, we show how a kinematic structure lear...

  2. Extended Target Recognition in Cognitive Radar Networks

    Directory of Open Access Journals (Sweden)

    Xiqin Wang

    2010-11-01

    Full Text Available We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  3. A theoretical derivation of the condensed history algorithm

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1992-01-01

    Although the Condensed History Algorithm is a successful and widely-used Monte Carlo method for solving electron transport problems, it has been derived only by an ad-hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)
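
    The operator-split structure is easy to see in code form. The toy Python step below separates the three operators exactly as described (stream, then scatter, then slow down), but the scattering width and stopping power are placeholder models, not the physics used in production condensed-history codes.

      import math, random

      def condensed_history_step(x, y, theta, energy, ds):
          # 1) streaming: move in a straight line along the current direction
          x += ds * math.cos(theta)
          y += ds * math.sin(theta)
          # 2) angular scattering: sample one aggregate deflection for the whole step
          sigma = 0.05 * math.sqrt(ds / max(energy, 1e-6))   # toy multiple-scattering width
          theta += random.gauss(0.0, sigma)
          # 3) slowing down: deduct the energy lost along the path-length step
          stopping_power = 0.02                              # toy constant dE/ds
          energy = max(energy - stopping_power * ds, 0.0)
          return x, y, theta, energy

      state = (0.0, 0.0, 0.0, 1.0)       # x, y, direction, energy
      for _ in range(100):
          state = condensed_history_step(*state, ds=0.1)
      print(state)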

  4. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening ... -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress...

  5. Progressive Conversion from B-rep to BSP for Streaming Geometric Modeling.

    Science.gov (United States)

    Bajaj, Chandrajit; Paoluzzi, Alberto; Scorzelli, Giorgio

    2006-01-01

    We introduce a novel progressive approach to generate a Binary Space Partition (BSP) tree and a convex cell decomposition for any input triangle boundary representation (B-rep), by utilizing a fast calculation of the surface inertia. We also generate a solid model at progressive levels of detail. This approach relies on a variation of standard BSP tree generation, allowing for labeling cells as in, out, and fuzzy, which permits a comprehensive representation of a solid as the Hasse diagram of a cell complex. Our new algorithm is embedded in a streaming computational framework, using four types of dataflow processes that continuously produce, transform, combine or consume subsets of cells depending on their number or input/output stream. A varied collection of geometric modeling techniques is integrated in this streaming framework, including polygonal, spline, solid and heterogeneous modeling with boundary and decompositive representations, Boolean set operations, Cartesian products and adaptive refinement. The real-time B-rep to BSP streaming results we report in this paper are a large step forward in the ultimate unification of rapid conceptual and detailed shape design methodologies.

  6. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    Science.gov (United States)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has been widely employed to distribute live video data on the Internet recently. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, which is caused by the handshaking process of advertising buffer map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where the block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.

  7. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    Science.gov (United States)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.

  8. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    Science.gov (United States)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.

  9. Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

    National Research Council Canada - National Science Library

    Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E

    2004-01-01

    .... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...

  10. A secure transmission scheme of streaming media based on the encrypted control message

    Science.gov (United States)

    Li, Bing; Jin, Zhigang; Shu, Yantai; Yu, Li

    2007-09-01

    As the use of streaming media applications has increased dramatically in recent years, streaming media security has become an important concern for protecting privacy. This paper proposes a new encryption scheme that takes into account the characteristics of streaming media and the disadvantages of existing methods: encrypt the control message in the streaming media at a high security level, and permute and confuse the non-control-message data according to the corresponding control message. Here the so-called control message refers to the key data of the streaming media, including the streaming media header, the header of each video frame, and the seed key. We encrypt the control message using a public key encryption algorithm that provides a high security level, such as RSA. At the same time we use the seed key to generate a key stream, from which the permutation list P corresponding to each GOP (group of pictures) is derived. The plaintext of the non-control message is XORed with the key stream to produce an intermediate ciphertext, which is then permuted according to P. The decryption process is the inverse of the above. We have set up a testbed for the above scheme and found it to be six to eight times faster than the conventional method. It can be applied not only between PCs but also between handheld devices.
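
    The non-control-message part of the scheme (keystream XOR followed by a per-GOP permutation) can be sketched as follows. The keystream expander below is a stand-in built from SHA-256 in counter mode, and the permutation is derived by seeding a PRNG with the keystream; the paper's actual generators are not specified here, so treat every function name and constant as an assumption.

      import hashlib, random

      def keystream(seed: bytes, n: int) -> bytes:
          # Toy keystream expander (SHA-256 in counter mode), for illustration only.
          out, ctr = bytearray(), 0
          while len(out) < n:
              out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
              ctr += 1
          return bytes(out[:n])

      def encrypt_gop(payload: bytes, seed: bytes, gop_index: int) -> bytes:
          ks = keystream(seed + gop_index.to_bytes(4, "big"), len(payload))
          middle = bytes(p ^ k for p, k in zip(payload, ks))   # XOR with the key stream
          perm = list(range(len(payload)))
          random.Random(ks).shuffle(perm)                      # permutation list P for this GOP
          return bytes(middle[i] for i in perm)

      def decrypt_gop(cipher: bytes, seed: bytes, gop_index: int) -> bytes:
          ks = keystream(seed + gop_index.to_bytes(4, "big"), len(cipher))
          perm = list(range(len(cipher)))
          random.Random(ks).shuffle(perm)
          middle = bytearray(len(cipher))
          for pos, i in enumerate(perm):
              middle[i] = cipher[pos]                          # undo the permutation
          return bytes(c ^ k for c, k in zip(middle, ks))

      seed = b"session-seed-key"
      data = b"non-control video payload"
      assert decrypt_gop(encrypt_gop(data, seed, 0), seed, 0) == data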

  11. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. In the paper we test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the parameters of the genetic algorithm that lead to maximum effectiveness of the computation. Genetic algorithms connect deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals are governed by a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because it shows a positive correlation with the effectiveness of the genetic algorithm over the entire range studied, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, and less so to the generation size. The sensitivity to the elitism rate is not as strong.
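
    As a concrete illustration of the setup, the sketch below fits a toy nonlinear curve with a real-coded genetic algorithm using the mutation rate (15%) and elitism rate (20%) reported as optimal above. The model, data, and operators are simplified stand-ins, not the demand function or GA implementation used in the paper.

      import random

      def model(x, a, b):
          return a * x / (b + x)                  # toy nonlinear demand-style curve

      # Hypothetical observations generated from a=5, b=2 plus a little noise.
      data = [(x, model(x, 5.0, 2.0) + random.gauss(0, 0.05)) for x in range(1, 20)]

      def fitness(ind):
          a, b = ind
          return -sum((y - model(x, a, b)) ** 2 for x, y in data)   # negative SSE

      def evolve(pop_size=60, gens=200, mut_rate=0.15, elite_frac=0.20):
          pop = [[random.uniform(0, 10), random.uniform(0.1, 10)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              nxt = [ind[:] for ind in pop[:int(elite_frac * pop_size)]]   # elitism
              while len(nxt) < pop_size:
                  p1, p2 = random.sample(pop[:pop_size // 2], 2)           # parents from better half
                  child = [random.choice(pair) for pair in zip(p1, p2)]    # uniform crossover
                  if random.random() < mut_rate:
                      child[random.randrange(2)] += random.gauss(0, 0.3)   # Gaussian mutation
                  nxt.append(child)
              pop = nxt
          return max(pop, key=fitness)

      print(evolve())   # should approach [5.0, 2.0]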

  12. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.

  13. SU-G-TeP2-09: Evaluation of the MaxFOV Extended Field of View (EFOV) Reconstruction Algorithm On a GE RT590 CT Scanner

    International Nuclear Information System (INIS)

    Grzetic, S; Weldon, M; Noa, K; Gupta, N

    2016-01-01

    Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays from Q-fix and Diacor, are included in our analysis. Hounsfield Units (HU) generated with the WideView algorithm varied in the extended field (beyond 50cm) and the scanned object’s border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and CIRS062 Electron Density Phantom were scanned on a GE RT590 CT-Simulator. The phantoms were positioned in multiple locations within the scan field of view so some of the density plugs were outside the 50cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both in average over a volume and in profile. Results: HU values are consistent between the two algorithms. Low-density material will have a slight increase in HU value and high-density material will have a slight decrease in HU value as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (It should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50cm FOV are within 40HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as it approaches the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.

  14. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  15. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    Science.gov (United States)

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
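
    A standard building block for this kind of stream-based data reduction is reservoir sampling, which keeps a uniform random sample of a stream in constant memory. The sketch below shows the classical algorithm; it is an illustration of the general technique, not necessarily the sampling algorithms proposed in the paper.

      import random

      def reservoir_sample(stream, k):
          # Uniform random sample of k items from a stream of unknown length, O(k) memory.
          sample = []
          for i, item in enumerate(stream):
              if i < k:
                  sample.append(item)
              else:
                  j = random.randrange(i + 1)   # keep the new item with probability k/(i+1)
                  if j < k:
                      sample[j] = item
          return sample

      readings = (20.0 + 0.01 * t for t in range(100_000))   # simulated sensor stream
      print(reservoir_sample(readings, k=10))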

  16. The Stream-Catchment (StreamCat) and Lake-Catchment ...

    Science.gov (United States)

    Background/Question/Methods: Lake and stream conditions respond to both natural and human-related landscape features. Characterizing these features within contributing areas (i.e., delineated watersheds) of streams and lakes could improve our understanding of how biological conditions vary spatially and improve the use, management, and restoration of these aquatic resources. However, the specialized geospatial techniques required to define and characterize stream and lake watersheds have limited their widespread use in both scientific and management efforts at large spatial scales. We developed the StreamCat and LakeCat Datasets to model, predict, and map the probable biological conditions of streams and lakes across the conterminous US (CONUS). Both StreamCat and LakeCat contain watershed-level characterizations of several hundred natural (e.g., soils, geology, climate, and land cover) and anthropogenic (e.g., urbanization, agriculture, mining, and forest management) landscape features for ca. 2.6 million stream segments and 376,000 lakes across the CONUS, respectively. These datasets can be paired with field samples to provide independent variables for modeling and other analyses. We paired 1,380 stream and 1,073 lake samples from the USEPA's National Aquatic Resource Surveys with StreamCat and LakeCat and used random forest (RF) to model and then map an invertebrate condition index and chlorophyll a concentration, respectively. Results/Conclusions: The invertebrate

  17. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g., A(i), B(i, j+k, k+3*m-7, ...), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
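
    To make the reproducibility issue concrete, the sketch below shows one simple way to get an order-independent sum: pre-round every summand to a common power-of-two grid and accumulate exactly in integer arithmetic. This is only an illustration of the idea and deliberately sacrifices some accuracy; it is not the communication-optimal algorithm referred to in the abstract.

      import math

      def reproducible_sum(values, order):
          # Pre-round to a shared grid, then sum exactly in Python integers.
          m = max(abs(v) for v in values)
          if m == 0.0:
              return 0.0
          grid = 2.0 ** (math.frexp(m)[1] - 40)    # keep ~40 bits of the largest summand
          total = sum(int(round(values[i] / grid)) for i in order)   # exact integer sum
          return total * grid

      vals = [1e16, 1.0, -1e16, 3.14, 2.71, -1.0]
      fwd = reproducible_sum(vals, range(len(vals)))
      rev = reproducible_sum(vals, reversed(range(len(vals))))
      assert fwd == rev                            # identical result regardless of order
      print(fwd, sum(vals), sum(reversed(vals)))   # the naive sums may disagree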

  18. Properties of a genetic algorithm extended by a random self-learning operator and asymmetric mutations: A convergence study for a task of powder-pattern indexing

    International Nuclear Information System (INIS)

    Paszkowicz, Wojciech

    2006-01-01

    Genetic algorithms represent a powerful global-optimisation tool applicable in solving tasks of high complexity in science, technology, medicine, communication, etc. The usual genetic-algorithm calculation scheme is extended here by introduction of a quadratic self-learning operator, which performs a partial local search for randomly selected representatives of the population. This operator is intended as a minor deterministic contribution to the (stochastic) genetic search. The population representing the trial solutions is split into two equal subpopulations allowed to exhibit different mutation rates (so-called asymmetric mutation). The convergence is studied in detail exploiting a crystallographic-test example of indexing of powder diffraction data of orthorhombic lithium copper oxide, varying such parameters as mutation rates and the learning rate. It is shown, through the fitness behaviour averaged over the subpopulation, how the genetic diversity in the population depends on the mutation rate of the given subpopulation. Conditions and algorithm parameter values favourable for convergence in the framework of the proposed approach are discussed using the results for the mentioned example. Further data are studied with a somewhat modified algorithm using periodically varying mutation rates and a problem-specific operator. The chance of finding the global optimum and the convergence speed are observed to be strongly influenced by the effective mutation level and the self-learning level. The optimal values of these two parameters are about 6 and 5%, respectively. The periodic changes of mutation rate are found to improve the explorative abilities of the algorithm. The results of the study confirm that the applied methodology leads to an improvement of the classical genetic algorithm and is therefore expected to be helpful in constructing algorithms for solving similar tasks of higher complexity.

  19. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighed) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest the computation algorithm, in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.

  20. Streaming Pool: reuse, combine and create reactive streams with pleasure

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    When connecting together heterogeneous and complex systems, it is not easy to exchange data between components. Streams of data are successfully used in industry in order to overcome this problem, especially in the case of "live" data. Streams are a specialization of the Observer design pattern and they provide asynchronous and non-blocking data flow. The ongoing effort of the ReactiveX initiative is one example that demonstrates how demanding this technology is even for big companies. Bridging the discrepancies of different technologies with common interfaces is already done by the Reactive Streams initiative and, in the JVM world, via reactive-streams-jvm interfaces. Streaming Pool is a framework for providing and discovering reactive streams. Through the mechanism of dependency injection provided by the Spring Framework, Streaming Pool provides a so called Discovery Service. This object can discover and chain streams of data that are technologically agnostic, through the use of Stream IDs. The stream to ...

  1. Interaction between stream temperature, streamflow, and groundwater exchanges in alpine streams

    Science.gov (United States)

    Constantz, James E.

    1998-01-01

    Four alpine streams were monitored to continuously collect stream temperature and streamflow for periods ranging from a week to a year. In a small stream in the Colorado Rockies, diurnal variations in both stream temperature and streamflow were significantly greater in losing reaches than in gaining reaches, with minimum streamflow losses occurring early in the day and maximum losses occurring early in the evening. Using measured stream temperature changes, diurnal streambed infiltration rates were predicted to increase as much as 35% during the day (based on a heat and water transport groundwater model), while the measured increase in streamflow loss was 40%. For two large streams in the Sierra Nevada Mountains, annual stream temperature variations ranged from 0° to 25°C. In summer months, diurnal stream temperature variations were 30–40% of annual stream temperature variations, owing to reduced streamflows and increased atmospheric heating. Previous reports document that one Sierra stream site generally gains groundwater during low flows, while the second Sierra stream site may lose water during low flows. For August the diurnal streamflow variation was 11% at the gaining stream site and 30% at the losing stream site. On the basis of measured diurnal stream temperature variations, streambed infiltration rates were predicted to vary diurnally as much as 20% at the losing stream site. Analysis of results suggests that evapotranspiration losses determined diurnal streamflow variations in the gaining reaches, while in the losing reaches, evapotranspiration losses were compounded by diurnal variations in streambed infiltration. Diurnal variations in stream temperature were reduced in the gaining reaches as a result of discharging groundwater of relatively constant temperature. For the Sierra sites, comparison of results with those from a small tributary demonstrated that stream temperature patterns were useful in delineating discharges of bank storage following

  2. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter), and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

  3. Extended superposed quantum-state initialization using disjoint prime implicants

    International Nuclear Information System (INIS)

    Rosenbaum, David; Perkowski, Marek

    2009-01-01

    Extended superposed quantum-state initialization using disjoint prime implicants is an algorithm for generating quantum arrays for the purpose of initializing a desired quantum superposition. The quantum arrays generated by this algorithm almost always use fewer gates than other algorithms and in the worst case use the same number of gates. These improvements are achieved by allowing certain parts of the quantum superposition that cannot be initialized directly by the algorithm to be initialized using special circuits. This allows more terms in the quantum superposition to be initialized at the same time which decreases the number of gates required by the generated quantum array.

  4. Models of Tidally Induced Gas Filaments in the Magellanic Stream

    Science.gov (United States)

    Pardy, Stephen A.; D’Onghia, Elena; Fox, Andrew J.

    2018-04-01

    The Magellanic Stream and Leading Arm of H I that stretches from the Large and Small Magellanic Clouds (LMC and SMC) and over 200° of the Southern sky is thought to be formed from multiple encounters between the LMC and SMC. In this scenario, most of the gas in the Stream and Leading Arm is stripped from the SMC, yet recent observations have shown a bifurcation of the Trailing Arm that reveals LMC origins for some of the gas. Absorption measurements in the Stream also reveal an order of magnitude more gas than in current tidal models. We present hydrodynamical simulations of the multiple encounters between the LMC and SMC at their first pass around the Milky Way, assuming that the Clouds were more extended and gas-rich in the past. Our models create filamentary structures of gas in the Trailing Stream from both the LMC and SMC. While the SMC trailing filament matches the observed Stream location, the LMC filament is offset. In addition, the total observed mass of the Stream in these models is underestimated by a factor of four when the ionized component is accounted for. Our results suggest that there should also be gas stripped from both the LMC and SMC in the Leading Arm, mirroring the bifurcation in the Trailing Stream. This prediction is consistent with recent measurements of spatial variation in chemical abundances in the Leading Arm, which show that gas from multiple sources is present, although its nature is still uncertain.

  5. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm produced promising results when compared to other population-based and iterative meta-heuristic algorithms on five standard benchmark problems. The software application was implemented in Java with an interactive interface that allows easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.
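
    A minimal Python sketch of the explosion-and-selection loop is given below (the paper's own implementation is in Java). The amplitude rule, spark counts, and benchmark function are simplified choices for illustration only.

      import random

      def sphere(x):                     # standard benchmark: minimum 0 at the origin
          return sum(v * v for v in x)

      def fireworks(f, dim=5, n_fireworks=5, n_sparks=20, iters=200, bound=5.0):
          fw = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_fireworks)]
          best = min(fw, key=f)
          for _ in range(iters):
              fits = [f(x) for x in fw]
              worst = max(fits)
              sparks = []
              for x, fx in zip(fw, fits):
                  # better fireworks explode with smaller amplitude (finer local search)
                  amp = bound * (fx + 1e-12) / (worst + 1e-12)
                  for _ in range(n_sparks // n_fireworks):
                      sparks.append([v + random.uniform(-amp, amp) for v in x])
              pool = sorted(fw + sparks, key=f)
              best = min(best, pool[0], key=f)
              # next generation: keep the best location plus random survivors for diversity
              fw = [pool[0]] + random.sample(pool[1:], n_fireworks - 1)
          return best, f(best)

      print(fireworks(sphere))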

  6. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.

    Science.gov (United States)

    Otero, Fernando E B; Freitas, Alex A

    2016-01-01

    Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner[Formula: see text] algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner[Formula: see text] algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretation of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner[Formula: see text] producing ordered rules are also presented.

  7. The kinematic footprints of five stellar streams in Andromeda's halo

    Science.gov (United States)

    Chapman, S. C.; Ibata, R.; Irwin, M.; Koch, A.; Letarte, B.; Martin, N.; Collins, M.; Lewis, G. F.; McConnachie, A.; Peñarrubia, J.; Rich, R. M.; Trethewey, D.; Ferguson, A.; Huxor, A.; Tanvir, N.

    2008-11-01

    We present a spectroscopic analysis of five stellar streams (`A', `B', `Cr', `Cp' and `D') as well as the extended star cluster, EC4, which lies within Stream `C', all discovered in the halo of M31 from our Canada-France-Hawaii Telescope/MegaCam survey. These spectroscopic results were initially serendipitous, making use of our existing observations from the DEep Imaging Multi-Object Spectrograph mounted on the Keck II telescope, and thereby emphasizing the ubiquity of tidal streams that account for ~70 per cent of the M31 halo stars in the targeted fields. Subsequent spectroscopy was then procured in Stream `C' and Stream `D' to trace the velocity gradient along the streams. Nine metal-rich ([Fe/H] ~ -0.7) stars at v_hel = -349.5 km s^-1, σ_v,corr ~ 5.1 +/- 2.5 km s^-1 are proposed as a serendipitous detection of Stream `Cr', with follow-up kinematic identification at a further point along the stream. Seven metal-poor ([Fe/H] ~ -1.3) stars confined to a narrow, 15 km s^-1 velocity bin centred at v_hel = -285.6, σ_v,corr = 4.3 (+1.7/-1.4) km s^-1 represent a kinematic detection of Stream `Cp', again with follow-up kinematic identification further along the stream. For the cluster EC4, candidate member stars with average [Fe/H] ~ -1.4 are found at v_hel = -282, suggesting it could be related to Stream `Cp'. No similarly obvious cold kinematic candidate is found for Stream `D', although candidates are proposed in both of two spectroscopic pointings along the stream (both at ~ -400 km s^-1). Spectroscopy near the edge of Stream `B' suggests a likely kinematic detection at v_hel ~ -330, σ_v,corr ~ 6.9 km s^-1, while a candidate kinematic detection of Stream `A' is found (plausibly associated to M33 rather than M31) with v_hel ~ -170, σ_v,corr = 12.5 km s^-1. The low dispersion of the streams in kinematics, physical thickness and metallicity makes it hard to reconcile with a scenario whereby these stream structures as an ensemble are related to the giant southern stream. We conclude that the M31 stellar

  8. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; Nikaido, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both the streaming potential and the accumulated charge of the water that flowed out were measured simultaneously using a sandwich-type cell. The voltages generated in divided sections along the flow direction satisfied additivity. The sign of the streaming potential agreed with that of the streaming electrification. The relation between streaming potential and streaming electrification was explained from the viewpoint of the electrical double layer at the glass-water interface.

  9. A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Jason Chin-Tiong Chan

    2018-01-01

    Full Text Available The optimal state sequence of a generalized High-Order Hidden Markov Model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm. This classical algorithm is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM. There will be no uncertainty if there is only one possible optimal state sequence for the HHMM. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence that was developed from a first-order to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM. The computational complexity of this extended algorithm is due to the growth of the model parameters. We introduce an efficient entropy-based decoding algorithm that uses a reduction approach, namely, the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. The EOTFA algorithm involves a transformation of a generalized high-order HMM into an equivalent first-order HMM, and an entropy-based decoding algorithm is developed based on the equivalent first-order HMM. This algorithm performs the computation based on the observational sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
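
    For reference, the classical maximum-likelihood Viterbi decoding that the entropy-based variants build on can be written in a few lines for a first-order HMM. The toy model below is a standard textbook example, not the generalized high-order formulation of the paper.

      def viterbi(obs, states, start_p, trans_p, emit_p):
          # V[t][s] = (best probability of reaching state s at time t, best predecessor)
          V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
          for t in range(1, len(obs)):
              V.append({})
              for s in states:
                  V[t][s] = max(
                      (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states
                  )
          last = max(states, key=lambda s: V[-1][s][0])
          path = [last]
          for t in range(len(obs) - 1, 0, -1):      # backtrack through stored predecessors
              path.append(V[t][path[-1]][1])
          return list(reversed(path))

      states = ("Rainy", "Sunny")
      obs = ("walk", "shop", "clean")
      start_p = {"Rainy": 0.6, "Sunny": 0.4}
      trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3}, "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
      emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
                "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
      print(viterbi(obs, states, start_p, trans_p, emit_p))   # ['Sunny', 'Rainy', 'Rainy']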

  10. Interactive real-time media streaming with reliable communication

    Science.gov (United States)

    Pan, Xunyu; Free, Kevin M.

    2014-02-01

    Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented based on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software ρεύμα (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, ρεύμα utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alteration of playback, such as pausing playback or tracking to a
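
    A bare-bones version of UDP media chunking with sequence numbers, in Python rather than .NET, is sketched below; it is not the application described above, and real streaming code would add retransmission or forward error correction since UDP gives no delivery guarantees.

      import socket, struct, threading, time

      CHUNK = 1400                         # payload size below a typical MTU
      ADDR = ("127.0.0.1", 50007)

      def receiver(n_chunks, out):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(ADDR)
          got = {}
          while len(got) < n_chunks:
              packet, _ = sock.recvfrom(CHUNK + 4)
              seq, = struct.unpack("!I", packet[:4])   # 4-byte sequence number header
              got[seq] = packet[4:]                    # reorder out-of-order chunks by seq
          sock.close()
          out["data"] = b"".join(got[i] for i in sorted(got))

      def sender(data):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for i, off in enumerate(range(0, len(data), CHUNK)):
              sock.sendto(struct.pack("!I", i) + data[off:off + CHUNK], ADDR)
          sock.close()

      media = bytes(range(256)) * 100                  # stand-in for encoded media bytes
      n_chunks = -(-len(media) // CHUNK)
      out = {}
      t = threading.Thread(target=receiver, args=(n_chunks, out))
      t.start()
      time.sleep(0.2)                                  # let the receiver bind first
      sender(media)
      t.join()
      print(out["data"] == media)                      # True unless datagrams were dropped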

  11. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  12. Analyzing indicators of stream health for Minnesota streams

    Science.gov (United States)

    Singh, U.; Kocian, M.; Wilson, B.; Bolton, A.; Nieber, J.; Vondracek, B.; Perry, J.; Magner, J.

    2005-01-01

    Recent research has emphasized the importance of using physical, chemical, and biological indicators of stream health for diagnosing impaired watersheds and their receiving water bodies. A multidisciplinary team at the University of Minnesota is carrying out research to develop a stream classification system for Total Maximum Daily Load (TMDL) assessment. Funding for this research is provided by the United States Environmental Protection Agency and the Minnesota Pollution Control Agency. One objective of the research study involves investigating the relationships between indicators of stream health and localized stream characteristics. Measured data from Minnesota streams collected by various government and non-government agencies and research institutions have been obtained for the research study. Innovative Geographic Information Systems tools developed by the Environmental Science Research Institute and the University of Texas are being utilized to combine and organize the data. Simple linear relationships between index of biological integrity (IBI) and channel slope, two-year stream flow, and drainage area are presented for the Redwood River and the Snake River Basins. Results suggest that more rigorous techniques are needed to successfully capture trends in IBI scores. Additional analyses will be done using multiple regression, principal component analysis, and clustering techniques. Uncovering key independent variables and understanding how they fit together to influence stream health are critical in the development of a stream classification for TMDL assessment.

  13. Reconstructing the Dwarf Galaxy Progenitor from Tidal Streams Using MilkyWay@home

    Science.gov (United States)

    Newberg, Heidi; Shelton, Siddhartha

    2018-04-01

    We attempt to reconstruct the mass and radial profile of stars and dark matter in the dwarf galaxy progenitor of the Orphan Stream, using only information from the stars in the Orphan Stream. We show that given perfect data and perfect knowledge of the dwarf galaxy profile and Milky Way potential, we are able to reconstruct the mass and radial profiles of both the stars and dark matter in the progenitor to high accuracy using only the density of stars along the stream and either the velocity dispersion or width of the stream in the sky. To perform this test, we simulated the tidal disruption of a two component (stars and dark matter) dwarf galaxy along the orbit of the Orphan Stream. We then created a histogram of the density of stars along the stream and a histogram of either the velocity dispersion or width of the stream in the sky as a function of position along the stream. The volunteer supercomputer MilkyWay@home was given these two histograms, the Milky Way potential model, and the orbital parameters for the progenitor. N-body simulations were run, varying dwarf galaxy parameters and the time of disruption. The goodness-of-fit of the model to the data was determined using an Earth-Mover Distance algorithm. The parameters were optimized using Differential Evolution. Future work will explore whether currently available information on the Orphan Stream stars is sufficient to constrain its progenitor, and how sensitive the optimization is to our knowledge of the Milky Way potential and the density model of the dwarf galaxy progenitor, as well as a host of other real-life unknowns.

  14. Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhare, Alok [Univ. of Southern California, Los Angeles, CA (United States); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Simmhan, Yogesh [Indian Inst. of Technology (IIT), Bangalore (India); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-06-29

    The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, the runtime variations in data characteristics such as data-rates and key-distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, the stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2.8x improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
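
    The consistent-hashing idea that underpins the elasticity claim can be illustrated with a small hash ring: when a worker is added or removed, only the keys in its arcs move. This sketch is a generic illustration, not the paper's implementation.

      import bisect, hashlib

      class ConsistentHashRing:
          def __init__(self, workers, vnodes=100):
              self.ring = []                       # sorted list of (hash, worker)
              for w in workers:
                  self.add(w, vnodes)

          def _hash(self, key):
              return int(hashlib.md5(key.encode()).hexdigest(), 16)

          def add(self, worker, vnodes=100):
              for i in range(vnodes):              # virtual nodes smooth the load
                  bisect.insort(self.ring, (self._hash(f"{worker}#{i}"), worker))

          def remove(self, worker):
              self.ring = [(h, w) for h, w in self.ring if w != worker]

          def route(self, key):
              h = self._hash(key)
              i = bisect.bisect(self.ring, (h, "")) % len(self.ring)
              return self.ring[i][1]               # first worker clockwise from the key

      ring = ConsistentHashRing(["worker-1", "worker-2", "worker-3"])
      keys = [f"key-{i}" for i in range(1000)]
      before = {k: ring.route(k) for k in keys}
      ring.add("worker-4")
      moved = sum(before[k] != ring.route(k) for k in keys)
      print(moved, "of", len(keys), "keys moved")  # roughly a quarter of the keys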

  15. Efficient User Authentication and Key Management for Peer-to-Peer Live Streaming Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Xuening; YIN Hao; LIN Chuang; DU Changlai

    2009-01-01

    Recent development of the peer-to-peer (P2P) live streaming technique has brought unprecedented new momentum to the Internet, being effective, scalable, and low cost. However, before these applications can be successfully deployed as commercial applications, efficient access control mechanisms are needed. This work, based on earlier research on the secure streaming architecture in TrustStream, analyzes how to ensure that only authorized users can access the original media in a P2P live streaming system by adopting a user authentication and key management scheme. The major features of this system include (1) the management server issues each authorized user a unique public key certificate, (2) a one-way hash chain extends the certificate's lifetime, (3) the original media is encrypted by the session key and delivered to the communication group, and (4) the session key is periodically updated and distributed with the media. Finally, analyses and test results show that the scheme provides a secure, scalable, reliable, and efficient access control solution for P2P live streaming systems.
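
    Feature (2), the one-way hash chain, is simple to sketch: the certificate carries the chain anchor, and the user later reveals earlier links to extend the certificate's validity one period at a time. The chain length and hash choice below are arbitrary illustrative values.

      import hashlib

      def build_hash_chain(seed: bytes, length: int):
          chain = [seed]
          for _ in range(length):
              chain.append(hashlib.sha256(chain[-1]).digest())
          return chain                 # chain[-1] is the anchor published in the certificate

      def verify(released: bytes, anchor: bytes, max_steps: int) -> bool:
          # A released link is valid if re-hashing it (at most max_steps times) hits the anchor.
          h = released
          for _ in range(max_steps):
              h = hashlib.sha256(h).digest()
              if h == anchor:
                  return True
          return False

      chain = build_hash_chain(b"user-secret-seed", length=30)
      anchor = chain[-1]
      # In period k the user releases chain[-1 - k]; the verifier re-hashes k times.
      print(verify(chain[-4], anchor, max_steps=30))        # True
      print(verify(b"forged-value", anchor, max_steps=30))  # False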

  16. Energy conservation in Newmark based time integration algorithms

    DEFF Research Database (Denmark)

    Krenk, Steen

    2006-01-01

    Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization...
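
    For readers who want to see where such balance terms come from, a plain Newmark step for a single-degree-of-freedom linear system is sketched below with the average-acceleration parameters (beta = 1/4, gamma = 1/2). The system values in the example are arbitrary; the derivation of the energy balance itself is in the paper.

      import math

      def newmark_sdof(m, c, k, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
          # Implicit Newmark integration of m*x'' + c*x' + k*x = f(t).
          x, v = x0, v0
          a = (f(0.0) - c * v - k * x) / m
          hist = [(0.0, x)]
          keff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
          for i in range(1, n_steps + 1):
              t = i * dt
              feff = (f(t)
                      + m * (x / (beta * dt * dt) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                      + c * (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                             + dt * (gamma / (2 * beta) - 1) * a))
              x_new = feff / keff
              a_new = (x_new - x) / (beta * dt * dt) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
              v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
              x, v, a = x_new, v_new, a_new
              hist.append((t, x))
          return hist

      # Undamped oscillator with a 1 s period: amplitude should be preserved.
      hist = newmark_sdof(m=1.0, c=0.0, k=4.0 * math.pi ** 2, f=lambda t: 0.0,
                          x0=1.0, v0=0.0, dt=0.01, n_steps=500)
      print(hist[0][1], hist[100][1], hist[500][1])   # stays close to cos(2*pi*t)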

  17. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  18. Efficient Algorithms for gcd and Cubic Residuosity in the Ring of Eisenstein Integers

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present simple and efficient algorithms for computing gcd and cubic residuosity in the ring of Eisenstein integers, Z[ω], i.e. the integers extended with ω, a complex primitive third root of unity. The algorithms are similar and may be seen as generalisations of the binary integer gcd and deri...
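
    The "binary integer gcd" being generalised is the classical Stein algorithm, which uses only shifts and subtractions. A plain-integer version is shown below for orientation; the Eisenstein-integer generalisation is worked out in the paper itself.

      def binary_gcd(a: int, b: int) -> int:
          # Stein's algorithm: gcd using only shifts and subtractions, no division.
          a, b = abs(a), abs(b)
          if a == 0:
              return b
          if b == 0:
              return a
          shift = 0
          while (a | b) & 1 == 0:          # factor out common powers of two
              a, b, shift = a >> 1, b >> 1, shift + 1
          while a & 1 == 0:
              a >>= 1
          while b:
              while b & 1 == 0:
                  b >>= 1
              if a > b:
                  a, b = b, a
              b -= a                       # both odd, so the difference is even
          return a << shift

      print(binary_gcd(48, 180))           # 12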

  19. Natural stream flow-rates measurements by tracer techniques

    International Nuclear Information System (INIS)

    Cuellar Mansilla, J.

    1982-01-01

    This paper presents a study of the precision obtained when measuring natural stream flow rates by tracer techniques, especially when the system has a steep slope and a bed made up of large, widely graded particles. The experiments were carried out in laboratory pilot channels with flow rates between 15 and 130 l/s, and in natural streams with flow rates from 1 to 25 m3/s. The tracers used were In-113m and Br-82 for the laboratory and field measurements, respectively. In both cases the tracer was injected as a pulse and its dilution measured by collecting samples at the measurement section at constant flow rate, of 5 l of water in the laboratory experiments and 60 l in the field experiments. The precisions obtained at a 95% confidence level were about 2% in the laboratory and 3% in the field. (I.V.)
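
    In the pulse-injection (total-count) dilution method, the flow rate follows from mass balance as Q = A / ∫c(t) dt, where A is the injected tracer amount and c(t) the concentration measured at the sampling section; a minimal numerical sketch with made-up sample values is:

```python
import numpy as np

def dilution_flow_rate(injected_amount, times, concentrations):
    """Slug-injection dilution gauging: Q = A / integral of c(t) dt at the sampling section."""
    return injected_amount / np.trapz(concentrations, times)

# Illustrative values only: 5.0 units of tracer injected, concentration-time curve sampled downstream.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])        # s
c = np.array([0.0, 0.004, 0.010, 0.006, 0.002, 0.0])      # units per litre
print(f"Q = {dilution_flow_rate(5.0, t, c):.1f} L/s")     # Q = 22.7 L/s
```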

  20. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    Science.gov (United States)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and a security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and the random bit stream are used in the encryption algorithm to change the positions and the values of the pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
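
    The confusion/diffusion stages driven by a random bit source can be sketched generically as below; a seeded pseudo-random generator stands in for the physical random bits of the CCSRL system, so this illustrates the structure only, not the proposed cipher.

```python
import numpy as np

def encrypt_image(img: np.ndarray, seed: int):
    """Permute pixel positions (confusion), then XOR with a key stream (diffusion)."""
    rng = np.random.default_rng(seed)            # stand-in for physical random bits
    flat = img.ravel()
    perm = rng.permutation(flat.size)            # control "matrix": a pixel permutation
    confused = flat[perm]
    key_stream = rng.integers(0, 256, flat.size, dtype=np.uint8)
    return (confused ^ key_stream).reshape(img.shape), perm, key_stream

def decrypt_image(cipher, perm, key_stream):
    flat = cipher.ravel() ^ key_stream
    out = np.empty_like(flat)
    out[perm] = flat                             # undo the permutation
    return out.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher, perm, ks = encrypt_image(img, seed=42)
assert np.array_equal(decrypt_image(cipher, perm, ks), img)
```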

  1. Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm

    Science.gov (United States)

    Smith, J. Scott

    2011-01-01

    Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.

  2. Methods of extending crop signatures from one area to another

    Science.gov (United States)

    Minter, T. C. (Principal Investigator)

    1979-01-01

    Efforts to develop a technology for signature extension during LACIE phases 1 and 2 are described. A number of haze and Sun angle correction procedures were developed and tested. These included the ROOSTER and OSCAR cluster-matching algorithms and their modifications, the MLEST and UHMLE maximum likelihood estimation procedures, and the ATCOR procedure. All these algorithms were tested on simulated data and consecutive-day LANDSAT imagery. The ATCOR, OSCAR, and MLEST algorithms were also tested for their capability to geographically extend signatures using LANDSAT imagery.

  3. Extended radio sources in the cluster environment

    International Nuclear Information System (INIS)

    Burns, J.O. Jr.

    1979-01-01

    Extended radio galaxies that lie in rich and poor clusters were studied. A sample of 3CR and 4C radio sources that spatially coincide with poor Zwicky clusters of galaxies was observed to obtain accurate positions and flux densities. Then interferometer observations at a resolution of approx. = 10 arcsec were performed on the sample. The resulting maps were used to determine the nature of the extended source structure, to make secure optical identifications, and to eliminate possible background sources. The results suggest that the environments around both classical double and head-tail radio sources are similar in rich and poor clusters. The majority of the poor cluster sources exhibit some signs of morphological distortion (i.e., head-tails) indicative of dynamic interaction with a relatively dense intracluster medium. A large fraction (60 to 100%) of all radio sources appear to be members of clusters of galaxies if one includes both poor and rich cluster sources. Detailed total intensity and polarization observations for a more restricted sample of two classical double sources and nine head-tail galaxies were also performed. The purpose was to examine the spatial distributions of spectral index and polarization. Thin streams of radio emission appear to connect the nuclear radio-point components to the more extended structures in the head-tail galaxies. It is suggested that a non-relativistic plasma beam can explain both the appearance of the thin streams and larger-scale structure as well as the energy needed to generate the observed radio emission. The rich and poor radio cluster samples are combined to investigate the relationship between source morphology and the scale sizes of clustering. There is some indication that a large fraction of radio sources, including those in these samples, are in superclusters of galaxies

  4. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches Institut, Giessen University (Germany); Ye, Hua [Institute of High Energy Physics, Beijing (China); Collaboration: PANDA-Collaboration

    2015-07-01

    The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as the first level and on a farm of GPUs or PCs as the second level. One important part of the system is the online track reconstruction. In this presentation, an online algorithm for helix track reconstruction in the solenoidal field is shown. The tracking algorithm is composed of two parts: a road finding module followed by an iterative helix parameter calculation module. A performance study using C++ and the status of the VHDL implementation are presented.

  5. Information-Theoretic Data Discarding for Dynamic Trees on Data Streams

    Directory of Open Access Journals (Sweden)

    Christoforos Anagnostopoulos

    2013-12-01

    Full Text Available Ubiquitous automated data collection at an unprecedented scale is making available streaming, real-time information flows in a wide variety of settings, transforming both science and industry. Learning algorithms deployed in such contexts often rely on single-pass inference, where the data history is never revisited. Learning may also need to be temporally adaptive to remain up-to-date against unforeseen changes in the data generating mechanism. Online Bayesian inference remains challenged by such transient, evolving data streams. Nonparametric modeling techniques can prove particularly ill-suited, as the complexity of the model is allowed to increase with the sample size. In this work, we take steps to overcome these challenges by porting information theoretic heuristics, such as exponential forgetting and active learning, into a fully Bayesian framework. We showcase our methods by augmenting a modern non-parametric modeling framework, dynamic trees, and illustrate its performance on a number of practical examples. The end product is a powerful streaming regression and classification tool, whose performance compares favorably to the state-of-the-art.
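
    Exponential forgetting of the kind ported here, geometric down-weighting of older observations in an online update, can be sketched in its simplest form (for a streaming mean and variance, not for the dynamic-tree posterior itself):

```python
class ForgettingStats:
    """Exponentially forgetting mean and variance for a univariate data stream."""

    def __init__(self, forgetting_factor: float = 0.99):
        self.lam = forgetting_factor     # lambda < 1: older data decay geometrically
        self.w = 0.0                     # effective sample size
        self.s1 = 0.0                    # weighted sum of x
        self.s2 = 0.0                    # weighted sum of x^2

    def update(self, x: float) -> None:
        self.w = self.lam * self.w + 1.0
        self.s1 = self.lam * self.s1 + x
        self.s2 = self.lam * self.s2 + x * x

    @property
    def mean(self) -> float:
        return self.s1 / self.w

    @property
    def variance(self) -> float:
        return self.s2 / self.w - self.mean ** 2
```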

  6. Applying Kitaev's algorithm in an ion trap quantum computer

    International Nuclear Information System (INIS)

    Travaglione, B.; Milburn, G.J.

    2000-01-01

    Full text: Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates

  7. Atmospheric deposition, retention, and stream export of dioxins and PCBs in a pristine boreal catchment

    International Nuclear Information System (INIS)

    Bergknut, Magnus; Laudon, Hjalmar; Jansson, Stina; Larsson, Anna; Gocht, Tilman; Wiberg, Karin

    2011-01-01

    The mass-balance between diffuse atmospheric deposition of organic pollutants, amount of pollutants retained by the terrestrial environment, and levels of pollutants released to surface stream waters was studied in a pristine northern boreal catchment. This was done by comparing the input of atmospheric deposition of polychlorinated dibenzo-p-dioxins and furans (PCDD/Fs) and PCBs with the amounts exported to surface waters. Two types of deposition samplers were used, equipped with a glass fibre thimble and an Amberlite sampler, respectively. The measured fluxes showed clear seasonality, with most of the input and export occurring during winter and spring flood, respectively. The mass balance calculations indicate that the boreal landscape is an effective sink for PCDD/Fs and PCBs, as 96.0-99.9% of received bulk deposition was retained, suggesting that organic pollutants will continue to impact stream water in the region for an extended period of time. - Highlights: → The fluxes of organic pollutants in a pristine boreal catchment were measured. → Most of the input and export occurred during winter and spring flood. → 96.0-99.9% of received bulk deposition was retained by the landscape. → Organic pollutants will impact boreal stream waters for an extended period of time. - The boreal landscape is effective in retaining diffuse atmospheric deposition of dioxins and PCBs, slowly releasing these pollutants into nearby streams.

  8. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    Full Text Available We describe C++ classes that simplify development of adaptive mesh refinement (AMR algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  9. Multi-stream face recognition for crime-fighting

    Science.gov (United States)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task; the face is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expression, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations in expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.

  10. BAM: Bayesian AMHG-Manning Inference of Discharge Using Remotely Sensed Stream Width, Slope, and Height

    Science.gov (United States)

    Hagemann, M. W.; Gleason, C. J.; Durand, M. T.

    2017-11-01

    The forthcoming Surface Water and Ocean Topography (SWOT) NASA satellite mission will measure water surface width, height, and slope of major rivers worldwide. The resulting data could provide an unprecedented account of river discharge at continental scales, but reliable methods need to be identified prior to launch. Here we present a novel algorithm for discharge estimation from only remotely sensed stream width, slope, and height at multiple locations along a mass-conserved river segment. The algorithm, termed the Bayesian AMHG-Manning (BAM) algorithm, implements a Bayesian formulation of streamflow uncertainty using a combination of Manning's equation and at-many-stations hydraulic geometry (AMHG). Bayesian methods provide a statistically defensible approach to generating discharge estimates in a physically underconstrained system but rely on prior distributions that quantify the a priori uncertainty of unknown quantities including discharge and hydraulic equation parameters. These were obtained from literature-reported values and from a USGS data set of acoustic Doppler current profiler (ADCP) measurements at USGS stream gauges. A data set of simulated widths, slopes, and heights from 19 rivers was used to evaluate the algorithms using a set of performance metrics. Results across the 19 rivers indicate an improvement in performance of BAM over previously tested methods and highlight a path forward in solving discharge estimation using solely satellite remote sensing.
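
    The hydraulic core of the method, Manning's equation Q = (1/n)·A·R^(2/3)·S^(1/2), can be evaluated directly as follows (a plain deterministic evaluation with illustrative values, outside the Bayesian framework of the paper):

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative values only: n = 0.035, A = 250 m^2, R = 3.2 m, S = 1e-4
print(f"Q = {manning_discharge(0.035, 250.0, 3.2, 1e-4):.0f} m^3/s")   # roughly 155 m^3/s
```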

  11. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    Directory of Open Access Journals (Sweden)

    Liu Jiping

    2017-12-01

    Full Text Available Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, the input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory resources and the very slow disk transfer rate. In this paper, we propose a stream tilling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping between the input and the computing process is broken. We then realize a streaming framework for the scheduling of the I/O processes and computing units. Each computing unit encapsulates the same copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, the experiments demonstrate that our stream tilling estimation can efficiently alleviate the heavy pressure from I/O-bound work, and the measured speedups after optimization greatly outperform the directly parallelized versions in shared-memory systems with multi-core processors.

  12. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering the ongoing demand for structured and standardized data in medicine. Goal is to give an algorithmic analysis of one particular autocompletion algorithm, called multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
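
    One plausible reading of the matching rule, that every typed token must be a prefix of a distinct word of the candidate term, can be sketched as follows (an illustrative reconstruction, not the authors' implementation):

```python
def multi_prefix_match(query: str, term: str) -> bool:
    """True if every query token is a prefix of a distinct word of the term."""
    tokens = query.lower().split()
    words = term.lower().split()
    used = [False] * len(words)
    for tok in tokens:
        for i, w in enumerate(words):
            if not used[i] and w.startswith(tok):
                used[i] = True
                break
        else:
            return False
    return True

def suggest(query: str, vocabulary: list, limit: int = 10) -> list:
    return [t for t in vocabulary if multi_prefix_match(query, t)][:limit]

terms = ["optic nerve meningioma", "optic neuritis", "nerve block"]
print(suggest("opt ner me", terms))     # ['optic nerve meningioma']
print(suggest("optic nerve m", terms))  # the baseline-style query also matches
```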

  13. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    Science.gov (United States)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. While in the process of generating a key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance the security. Such an algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of large key space and high security for practical image encryption.

  14. Calculating Graph Algorithms for Dominance and Shortest Path

    DEFF Research Database (Denmark)

    Sergey, Ilya; Midtgaard, Jan; Clarke, Dave

    2012-01-01

    We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation.
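
    The fixed-point view of single-source shortest paths can be illustrated by iterating the defining equation until the distance map stabilises, essentially Bellman-Ford; this sketches the computational reading, not the calculational derivation in the paper.

```python
import math

def shortest_paths(vertices, edges, source):
    """Iterate d(v) = min(d(v), d(u) + w) over edges (u, v, w) to its least fixed point.
    Assumes no negative cycles, so the iteration terminates."""
    dist = {v: math.inf for v in vertices}
    dist[source] = 0.0
    changed = True
    while changed:                      # reaches a fixed point after at most |V|-1 sweeps
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
    return dist

edges = [("s", "a", 2.0), ("s", "b", 5.0), ("a", "b", 1.0), ("b", "t", 2.0)]
print(shortest_paths({"s", "a", "b", "t"}, edges, "s"))  # s: 0.0, a: 2.0, b: 3.0, t: 5.0
```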

  15. Shifting stream planform state decreases stream productivity yet increases riparian animal production

    Science.gov (United States)

    Venarsky, Michael P.; Walters, David M.; Hall, Robert O.; Livers, Bridget; Wohl, Ellen

    2018-01-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m−2), but values were 2 ×–21 × higher in undisturbed reaches per unit of stream valley (m−1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at the per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3 × more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream–riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  16. Characterization of high level nuclear waste glass samples following extended melter idling

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Kevin M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Peeler, David K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kruger, Albert A. [USDOE Office of River Protection, Richland, WA (United States)

    2015-06-16

    The Savannah River Site Defense Waste Processing Facility (DWPF) melter was recently idled with glass remaining in the melt pool and riser for approximately three months. This situation presented a unique opportunity to collect and analyze glass samples since outages of this duration are uncommon. The objective of this study was to obtain insight into the potential for crystal formation in the glass resulting from an extended idling period. The results will be used to support development of a crystal-tolerant approach for operation of the high-level waste melter at the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Two glass pour stream samples were collected from DWPF when the melter was restarted after idling for three months. The samples did not contain crystallization that was detectible by X-ray diffraction. Electron microscopy identified occasional spinel and noble metal crystals of no practical significance. Occasional platinum particles were observed by microscopy as an artifact of the sample collection method. Reduction/oxidation measurements showed that the pour stream glasses were fully oxidized, which was expected after the extended idling period. Chemical analysis of the pour stream glasses revealed slight differences in the concentrations of some oxides relative to analyses of the melter feed composition prior to the idling period. While these differences may be within the analytical error of the laboratories, the trends indicate that there may have been some amount of volatility associated with some of the glass components, and that there may have been interaction of the glass with the refractory components of the melter. These changes in composition, although small, can be attributed to the idling of the melter for an extended period. The changes in glass composition resulted in a 70-100 °C increase in the predicted spinel liquidus temperature (TL) for the pour stream glass samples relative to the analysis of the melter feed prior to

  17. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. Adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  18. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    Full Text Available An adaptive spatial clustering (ASC) algorithm is proposed in the present study, which employs sweep-circle techniques and a dynamic threshold setting based on Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through modification of an initial model (for example, a minimal spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily shaped clusters while adapting efficiently to the non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also well suited to data streaming settings, where spatial clusters must be discovered on the fly in large, dynamically changing data sets.

  19. Shifting stream planform state decreases stream productivity yet increases riparian animal production

    Science.gov (United States)

    Venarsky, Michael P.; Walters, David M.; Hall, Robert O.; Livers, Bridget; Wohl, Ellen

    2018-01-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m−2), but values were 2 ×–21 × higher in undisturbed reaches per unit of stream valley (m−1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at the per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3 × more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream–riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  20. ADAPTIVE STREAMING OVER HTTP (DASH UNTUK APLIKASI VIDEO STREAMING

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based streaming video service delivered over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) over the Internet, building on the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the video source to a lower bit rate with the H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of the specified duration. These packets are assembled into a streaming media manifest, the Media Presentation Description (MPD), also known as MPEG-DASH. The MPEG-DASH streaming video format runs on a platform with the bitdash player integrated with bitcoin. With this scheme, the video is available in several bit-rate variants, which gives rise to the concept of scalability of streaming video services on the client side. The main target of the mechanism is a smooth MPEG-DASH streaming video display on the client. The simulation results show that the MPEG-DASH-based scalable video streaming scheme is able to improve the quality of the image displayed on the client side, where video buffering can be kept constant and smooth for the duration of playback.
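
    The client-side scalability this enables comes down to choosing, per segment, the highest representation whose bitrate fits the measured throughput while the playback buffer stays healthy; a minimal rate-selection sketch (not the logic of any particular player) is:

```python
def select_representation(bitrates_bps, measured_throughput_bps, buffer_s,
                          safety=0.8, min_buffer_s=10.0):
    """Pick the highest bitrate that fits the throughput estimate; drop to the lowest
    representation when the playback buffer is nearly empty."""
    ladder = sorted(bitrates_bps)
    if buffer_s < min_buffer_s:
        return ladder[0]
    budget = safety * measured_throughput_bps
    feasible = [b for b in ladder if b <= budget]
    return feasible[-1] if feasible else ladder[0]

ladder = [400_000, 1_000_000, 2_500_000, 5_000_000]
print(select_representation(ladder, measured_throughput_bps=3_200_000, buffer_s=25.0))  # 2500000
```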

  1. Machine Learning-based Transient Brokers for Real-time Classification of the LSST Alert Stream

    Science.gov (United States)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika; ANTARES Collaboration

    2018-01-01

    The number of transient events discovered by wide-field time-domain surveys already far outstrips the combined followup resources of the astronomical community. This number will only increase as we progress towards the commissioning of the Large Synoptic Survey Telescope (LSST), breaking the community's current followup paradigm. Transient brokers - software to sift through, characterize, annotate and prioritize events for followup - will be a critical tool for managing alert streams in the LSST era. Developing the algorithms that underlie the brokers, and obtaining simulated LSST-like datasets prior to LSST commissioning, to train and test these algorithms are formidable, though not insurmountable challenges. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. We have been developing completely automated methods to characterize and classify variable and transient events from their multiband optical photometry. We describe the hierarchical ensemble machine learning algorithm we are developing, and test its performance on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, as well as our progress towards incorporating these into a real-time event broker working on live alert streams from time-domain surveys.

  2. Cardiovascular System Sonographic Evaluation Algorithm: A New Sonographic Algorithm for Evaluation of the Fetal Cardiovascular System in the Second Trimester.

    Science.gov (United States)

    De León-Luis, Juan; Bravo, Coral; Gámez, Francisco; Ortiz-Quintana, Luis

    2015-07-01

    To evaluate the reproducibility and feasibility of the new cardiovascular system sonographic evaluation algorithm for studying the extended fetal cardiovascular system, including the portal, thymic, and supra-aortic areas, in the second trimester of pregnancy (19-22 weeks). We performed a cross-sectional study of pregnant women with healthy fetuses (singleton and twin pregnancies) attending our center from March to August 2011. The extended fetal cardiovascular system was evaluated by following the new algorithm, a sequential acquisition of axial views comprising the following (caudal to cranial): I, portal sinus; II, ductus venosus; III, hepatic veins; IV, 4-chamber view; V, left ventricular outflow tract; VI, right ventricular outflow tract; VII, 3-vessel and trachea view; VIII, thy-box; and IX, subclavian arteries. Interobserver agreement on the feasibility and exploration time was estimated in a subgroup of patients. The feasibility and exploration time were determined for the main cohort. Maternal, fetal, and sonographic factors affecting both features were evaluated. Interobserver agreement was excellent for all views except view VIII; the difference in the mean exploration time between observers was 1.5 minutes (95% confidence interval, 0.7-2.1 minutes; P cardiovascular system sonographic evaluation algorithm is a reproducible and feasible approach for exploration of the extended fetal cardiovascular system in a second-trimester scan. It can be used to explore these areas in normal and abnormal conditions and provides an integrated image of extended fetal cardiovascular anatomy. © 2015 by the American Institute of Ultrasound in Medicine.

  3. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    Science.gov (United States)

    Akbari, D.

    2017-11-01

    In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using the GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  4. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm degenerates into a special case of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  5. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    International Nuclear Information System (INIS)

    Shoupeng, Song; Zhou, Jiang

    2017-01-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low-pass filter (LPF), and root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of pipeline samples. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry. (paper)
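
    The processing chain described, mixing with quadrature carriers, low-pass filtering, and taking the root of the sum of squares, can be sketched digitally as follows; the paper realises the same chain in hardware with analog mixers and an LPF.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def qd_envelope(signal, fs, f_carrier, cutoff_hz):
    """Quadrature demodulation: mix with cos/sin, low-pass filter, take sqrt(I^2 + Q^2)."""
    t = np.arange(signal.size) / fs
    i_mix = signal * np.cos(2 * np.pi * f_carrier * t)
    q_mix = signal * np.sin(2 * np.pi * f_carrier * t)
    b, a = butter(4, cutoff_hz / (fs / 2))        # 4th-order Butterworth LPF
    i_bb = filtfilt(b, a, i_mix)
    q_bb = filtfilt(b, a, q_mix)
    return 2.0 * np.sqrt(i_bb**2 + q_bb**2)       # factor 2 restores the envelope amplitude

# Illustrative test: a Gaussian-windowed 5 MHz ultrasonic pulse sampled at 100 MHz.
fs, fc = 100e6, 5e6
t = np.arange(0, 4e-6, 1 / fs)
pulse = np.exp(-((t - 2e-6) ** 2) / (2 * (0.2e-6) ** 2)) * np.cos(2 * np.pi * fc * t)
env = qd_envelope(pulse, fs, fc, cutoff_hz=2e6)
```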

  6. THE COS/UVES ABSORPTION SURVEY OF THE MAGELLANIC STREAM. I. ONE-TENTH SOLAR ABUNDANCES ALONG THE BODY OF THE STREAM

    International Nuclear Information System (INIS)

    Fox, Andrew J.; Richter, Philipp; Wakker, Bart P.; Lehner, Nicolas; Howk, J. Christopher; Ben Bekhti, Nadya; Bland-Hawthorn, Joss; Lucas, Stephen

    2013-01-01

    The Magellanic Stream (MS) is a massive and extended tail of multi-phase gas stripped out of the Magellanic Clouds and interacting with the Galactic halo. In this first paper of an ongoing program to study the Stream in absorption, we present a chemical abundance analysis based on HST/COS and VLT/UVES spectra of four active galactic nuclei (RBS 144, NGC 7714, PHL 2525, and HE 0056-3622) lying behind the MS. Two of these sightlines yield good MS metallicity measurements: toward RBS 144 we measure a low MS metallicity of [S/H] = [S II/H I] = –1.13 ± 0.16 while toward NGC 7714 we measure [O/H] = [O I/H I] = –1.24 ± 0.20. Taken together with the published MS metallicity toward NGC 7469, these measurements indicate a uniform abundance of ≈0.1 solar along the main body of the Stream. This provides strong support to a scenario in which most of the Stream was tidally stripped from the SMC ≈ 1.5-2.5 Gyr ago (a time at which the SMC had a metallicity of ≈0.1 solar), as predicted by several N-body simulations. However, in Paper II of this series, we report a much higher metallicity (S/H = 0.5 solar) in the inner Stream toward Fairall 9, a direction sampling a filament of the MS that Nidever et al. claim can be traced kinematically to the Large Magellanic Cloud, not the Small Magellanic Cloud. This shows that the bifurcation of the Stream is evident in its metal enrichment, as well as its spatial extent and kinematics. Finally we measure a similar low metallicity [O/H] = [O I/H I] = –1.03 ± 0.18 in the v LSR = 150 km s –1 cloud toward HE 0056-3622, which belongs to a population of anomalous velocity clouds near the south Galactic pole. This suggests these clouds are associated with the Stream or more distant structures (possibly the Sculptor Group, which lies in this direction at the same velocity), rather than tracing foreground Galactic material

  7. Performance of Сellular Automata-based Stream Ciphers in GPU Implementation

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2016-01-01

    Full Text Available Earlier, the author developed methods for building high-performance symmetric ciphers based on generalized cellular automata, which yield encryption algorithms with extremely high performance in hardware implementations. However, their implementation on conventional microprocessors is not fast, a common situation that delimits the scope of application of these ciphers. Nevertheless, the use of graphics processors makes it possible to achieve adequate performance in a software implementation. This article extends a series of articles that study various aspects of constructing and implementing cryptographic algorithms based on generalized cellular automata, and it examines the feasibility of implementing the cryptographic algorithms under consideration on GPUs. The implemented encryption algorithm is a key generator comprising 2k generalized cellular automata whose graphs are Ramanujan graphs. The cells of the k produced gamma (keystream) streams alternate, which allows the GPU capabilities to be used more fully. OpenCL was used for the implementation, as the most universal and platform-independent API. The software, written in C++, was designed so that the user can set various parameters, including the encryption key, the graph structure, the local communication function, various constants, etc. A variety of graphics processors were used for testing (NVIDIA GTX 650, NVIDIA GTX 770, AMD R9 280X). Depending on the operating conditions and the GPU used, the performance ranges from 0.47 to 6.61 Gb/s, which is comparable to the performance of counterparts. Thus, the article demonstrates that the GPU makes an efficient software implementation of stream ciphers based on generalized cellular automata possible. This work was supported by the RFBR, project No. 16-07-00542.
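
    As a much-simplified stand-in for the generalized cellular automata discussed here, a keystream generator built on an elementary cellular automaton (Rule 30, a common pedagogical choice) can be sketched as follows; it illustrates the keystream idea only and is not the cipher construction of the article.

```python
def rule30_keystream(state: list, n_bytes: int) -> bytes:
    """Generate a keystream by iterating Rule 30 and sampling the centre cell."""
    n = len(state)
    centre = n // 2
    out = bytearray()
    bit_buffer, bits = 0, 0
    while len(out) < n_bytes:
        bit_buffer = (bit_buffer << 1) | state[centre]
        bits += 1
        if bits == 8:
            out.append(bit_buffer)
            bit_buffer, bits = 0, 0
        # Rule 30 update: new = left XOR (centre OR right), with wrap-around boundaries.
        state = [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n]) for i in range(n)]
    return bytes(out)

key_state = [0] * 64
key_state[32] = 1                       # simple initial configuration for illustration
stream = rule30_keystream(key_state, 16)
ciphertext = bytes(p ^ k for p, k in zip(b"hello world!0000", stream))
```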

  8. Akamai Streaming

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    Akamai offers world-class streaming media services that enable Internet content providers and enterprises to succeed in today's Web-centric marketplace. They deliver live event Webcasts (complete with video production, encoding, and signal acquisition services), streaming media on demand, 24/7 Webcasts and a variety of streaming application services based upon their EdgeAdvantage.

  9. Multi-robot task allocation based on two dimensional artificial fish swarm algorithm

    Science.gov (United States)

    Zheng, Taixiong; Li, Xueqin; Yang, Liangyi

    2007-12-01

    The problem of task allocation for multiple robots is to allocate relatively more tasks to relatively fewer robots so as to minimize the processing time of these tasks. To obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector; thus, each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed, and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.

  10. The Midwest Stream Quality Assessment—Influences of human activities on streams

    Science.gov (United States)

    Van Metre, Peter C.; Mahler, Barbara J.; Carlisle, Daren M.; Coles, James F.

    2018-04-16

    Healthy streams and the fish and other organisms that live in them contribute to our quality of life. Extensive modification of the landscape in the Midwestern United States, however, has profoundly affected the condition of streams. Row crops and pavement have replaced grasslands and woodlands, streams have been straightened, and wetlands and fields have been drained. Runoff from agricultural and urban land brings sediment and chemicals to streams. What is the chemical, physical, and biological condition of Midwestern streams? Which physical and chemical stressors are adversely affecting biological communities, what are their origins, and how might we lessen or avoid their adverse effects?In 2013, the U.S. Geological Survey (USGS) conducted the Midwest Stream Quality Assessment to evaluate how human activities affect the biological condition of Midwestern streams. In collaboration with the U.S. Environmental Protection Agency National Rivers and Streams Assessment, the USGS sampled 100 streams, chosen to be representative of the different types of watersheds in the region. Biological condition was evaluated based on the number and diversity of fish, algae, and invertebrates in the streams. Changes to the physical habitat and chemical characteristics of the streams—“stressors”—were assessed, and their relation to landscape factors and biological condition was explored by using mathematical models. The data and models help us to better understand how the human activities on the landscape are affecting streams in the region.

  11. Some multigrid algorithms for SIMD machines

    Energy Technology Data Exchange (ETDEWEB)

    Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States)

    1996-12-31

    Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.

  12. A spectroscopic survey of EC4, an extended cluster in Andromeda's halo

    Science.gov (United States)

    Collins, M. L. M.; Chapman, S. C.; Irwin, M.; Ibata, R.; Martin, N. F.; Ferguson, A. M. N.; Huxor, A.; Lewis, G. F.; Mackey, A. D.; McConnachie, A. W.; Tanvir, N.

    2009-07-01

    We present a spectroscopic survey of candidate red giant branch stars in the extended star cluster, EC4, discovered in the halo of M31 from our Canada-France-Hawaii Telescope/MegaCam survey, overlapping the tidal streams, Streams 'Cp' and 'Cr'. These observations used the DEep Imaging Multi-Object Spectrograph mounted on the Keck II telescope to obtain spectra around the CaII triplet region with ~1.3 Å resolution. Six stars lying on the red giant branch within two core radii of the centre of EC4 are found to have an average v_r = −287.9 (+1.9/−2.4) km s⁻¹ and σ_v,corr = 2.7 (+4.2/−2.7) km s⁻¹, taking instrumental errors into account. The resulting mass-to-light ratio for EC4 is M/L = 6.7 (+15/−6.7) M⊙/L⊙, a value that is consistent with a globular cluster within the 1σ errors we derive. From the summed spectra of our member stars, we find EC4 to be metal-poor, with [Fe/H] = -1.6 +/- 0.15. We discuss several formation and evolution scenarios which could account for our kinematic and metallicity constraints on EC4, and conclude that EC4 is most comparable with an extended globular cluster. We also compare the kinematics and metallicity of EC4 with Streams 'Cp' and 'Cr', and find that EC4 bears a striking resemblance to Stream 'Cp' in terms of velocity, and that the two structures are identical in terms of both their spectroscopic and photometric metallicities. From this, we conclude that EC4 is likely related to Stream 'Cp'. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

  13. Real-Time Joint Streaming Data Processing from Social and Physical Sensors

    Science.gov (United States)

    Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.

    2014-12-01

    The results of the technological breakthroughs in computing that have taken place over the last few decades makes it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt that can significantly reduce the response time to a felt or damaging earthquake. Social sensors, here represented as Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real-time based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. Results of this work suggest that the use of data from social media, in conjunction

  14. Analysis of hydraulic characteristics for stream diversion in small stream

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang-Jin; Jun, Kye-Won [Chungbuk National University, Cheongju(Korea)

    2001-10-31

    This study analyzes the hydraulic characteristics of a stream diversion reach by numerical model tests, providing basic data for flood analysis and for understanding stream flow characteristics. The hydraulic characteristics of the Seoknam stream were analyzed using the computer models HEC-RAS (a one-dimensional model) and RMA2 (a two-dimensional finite element model). The results show that RMA2, which simulates the left bank, main channel, and right bank of the stream, is a more effective method than HEC-RAS for analyzing flow where channel bends, steep slopes, and complex bed forms affect the stream flow characteristics. (author). 13 refs., 3 tabs., 5 figs.

  15. The metaphors we stream by: Making sense of music streaming

    OpenAIRE

    Hagen, Anja Nylund

    2016-01-01

    In Norway music-streaming services have become mainstream in everyday music listening. This paper examines how 12 heavy streaming users make sense of their experiences with Spotify and WiMP Music (now Tidal). The analysis relies on a mixed-method qualitative study, combining music-diary self-reports, online observation of streaming accounts, Facebook and last.fm scrobble-logs, and in-depth interviews. By drawing on existing metaphors of Internet experiences we demonstrate that music-streaming...

  16. Recursive parameter estimation for Hammerstein-Wiener systems using modified EKF algorithm.

    Science.gov (United States)

    Yu, Feng; Mao, Zhizhong; Yuan, Ping; He, Dakuo; Jia, Mingxing

    2017-09-01

    This paper focuses on the recursive parameter estimation for the single input single output Hammerstein-Wiener system model, and the study is then extended to a rarely mentioned multiple input single output Hammerstein-Wiener system. Inspired by the extended Kalman filter algorithm, two basic recursive algorithms are derived from the first and the second order Taylor approximation. Based on the form of the first order approximation algorithm, a modified algorithm with larger parameter convergence domain is proposed to cope with the problem of small parameter convergence domain of the first order one and the application limit of the second order one. The validity of the modification on the expansion of convergence domain is shown from the convergence analysis and is demonstrated with two simulation cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
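
    A generic first-order EKF-style recursive parameter update, the kind of step from which the paper's basic algorithms are derived, can be sketched as follows; the output function and its Jacobian are supplied by the caller, and this is not the specific Hammerstein-Wiener parameterisation used in the paper.

```python
import numpy as np

def ekf_param_update(theta, P, y, h, jac, R=1.0, Q=0.0):
    """One recursive step: predict with random-walk parameters, correct with measurement y.

    theta : current parameter estimate, shape (n,)
    P     : parameter covariance, shape (n, n)
    y     : new scalar output measurement
    h     : model output function, h(theta) -> float
    jac   : Jacobian of h w.r.t. theta, jac(theta) -> shape (n,)
    """
    P_pred = P + Q * np.eye(len(theta))          # random-walk parameter model
    H = jac(theta).reshape(1, -1)                # linearisation (first-order Taylor)
    S = H @ P_pred @ H.T + R                     # innovation variance, shape (1, 1)
    K = (P_pred @ H.T) / S                       # Kalman gain, shape (n, 1)
    theta_new = theta + (K * (y - h(theta))).ravel()
    P_new = (np.eye(len(theta)) - K @ H) @ P_pred
    return theta_new, P_new
```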

  17. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    Science.gov (United States)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which degrades the quality of the recovered result; thus, a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The well-established alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm can efficiently achieve high-resolution imaging of extended targets.

  18. A Context-Aware Adaptive Streaming Media Distribution System in a Heterogeneous Network with Multiple Terminals

    Directory of Open Access Journals (Sweden)

    Yepeng Ni

    2016-01-01

    Full Text Available We consider the problem of streaming media transmission in a heterogeneous network from a multisource server to multiple home terminals. In a wired network, the transmission performance is limited by the network state (e.g., bandwidth variation, jitter, and packet loss). In a wireless network, multiple user terminals can cause bandwidth competition. Thus, streaming media distribution in a heterogeneous network becomes a severe challenge that is critical for the QoS guarantee. In this paper, we propose a context-aware adaptive streaming media distribution system (CAASS), which implements a context-aware module to perceive the environment parameters and uses a strategy analysis (SA) module to deduce the most suitable service level. This approach is able to improve the video quality and guarantee the streaming QoS. We formulate the optimization problem relating QoS to the environment parameters based on the QoS testing algorithm for IPTV in ITU-T G.1070. We evaluate the performance of the proposed CAASS in 12 types of experimental environments using a prototype system. Experimental results show that CAASS can dynamically adjust the service level according to environment variations (e.g., network state and terminal performance) and outperforms existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).

  19. Stream periphyton responses to mesocosm treatments of ...

    Science.gov (United States)

    A stream mesocosm experiment was designed to compare biotic responses among streams exposed to an equal excess specific conductivity target of 850 µS/cm relative to a control that was set for 200 µS/cm and three treatments comprised of different major ion contents. Each treatment and the control was replicated 4 times at the mesocosm scale (16 mesocosms total). The treatments were based on dosing the background mesocosm water, a continuous flow-through mixture of natural river water and reverse osmosis treated water, with stock salt solutions prepared from 1) a mixture of sodium chloride and calcium chloride (Na/Cl chloride), 2) sodium bicarbonate, and 3) magnesium sulfate. The realized average specific conductance over the first 28d of continuous dosing was 827, 829, and 847 µS/cm, for the chloride, bicarbonate, and sulfate based treatments, respectively, and did not differ significantly. The controls averaged 183 µS/cm. Here we focus on comparing stream periphyton communities across treatments based on measurements obtained from a Pulse-Amplitude Modulated (PAM) fluorometer. The fluorometer is used in situ and with built in algorithms distributes the total aerial algal biomass (µg/cm2) of the periphyton among cyanobacteria, diatoms, and green algae. A measurement is recorded in a matter of seconds and, therefore, many different locations can be measured with in each mesocosm at a high return frequency. Eight locations within each of the 1 m2 (0.3 m W x 3

  20. Concentrating small particles in protoplanetary disks through the streaming instability

    Science.gov (United States)

    Yang, C.-C.; Johansen, A.; Carrera, D.

    2017-10-01

    Laboratory experiments indicate that direct growth of silicate grains via mutual collisions can only produce particles up to roughly millimeters in size. On the other hand, recent simulations of the streaming instability have shown that mm/cm-sized particles require an excessively high metallicity for dense filaments to emerge. Using a numerical algorithm for stiff mutual drag force, we perform simulations of small particles with significantly higher resolutions and longer simulation times than in previous investigations. We find that particles of dimensionless stopping time τs = 10⁻² and 10⁻³ - representing cm- and mm-sized particles interior to the water ice line - concentrate themselves via the streaming instability at a solid abundance of a few percent. We thus revise a previously published critical solid abundance curve for the regime of τs ≪ 1. The solid density in the concentrated regions reaches values higher than the Roche density, indicating that direct collapse of particles down to mm sizes into planetesimals is possible. Our results hence bridge the gap in particle size between direct dust growth limited by bouncing and the streaming instability.

  1. Disrupting the Dissertation: Linked Data, Enhanced Publication and Algorithmic Culture

    Science.gov (United States)

    Tracy, Frances; Carmichael, Patrick

    2017-01-01

    This article explores how the three aspects of Striphas' notion of algorithmic culture (information, crowds and algorithms) might influence and potentially disrupt established educational practices. We draw on our experience of introducing semantic web and linked data technologies into higher education settings, focussing on extended student…

  2. ANOTHER LOOK AT THE EASTERN BANDED STRUCTURE: A STELLAR DEBRIS STREAM AND A POSSIBLE PROGENITOR

    International Nuclear Information System (INIS)

    Grillmair, C. J.

    2011-01-01

    Using the Sloan Digital Sky Survey Data Release 7, we re-examine the Eastern Banded Structure (EBS), a stellar debris stream first discovered in Data Release 5 and more recently detected in velocity space by Schlaufman et al. The visible portion of the stream is 18° long, lying roughly in the Galactic Anticenter direction and extending from Hydra to Cancer. At an estimated distance of 9.7 kpc, the stream is ∼170 pc across on the sky. The curvature of the stream implies a fairly eccentric box orbit that passes close to both the Galactic center and to the Sun, making it dynamically distinct from the nearby Monoceros, Anticenter, and GD-1 streams. Within the stream is a relatively strong, 2°-wide concentration of stars with a very similar color-magnitude distribution that we designate Hydra I. Given its prominence within the stream and its unusual morphology, we suggest that Hydra I is the last vestige of EBS's progenitor, possibly already unbound or in the final throes of tidal dissolution. Though both Hydra I and the EBS have a relatively high velocity dispersion, given the comparatively narrow width of the stream and the high frequency of encounters with the bulge and massive constituents of the disk that such an eccentric orbit would entail, we suggest that the progenitor was likely a globular cluster and that both it and the stream have undergone significant heating over time.

  3. DYNAMIC ESTIMATION FOR PARAMETERS OF INTERFERENCE SIGNALS BY THE SECOND ORDER EXTENDED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    P. A. Ermolaev

    2014-03-01

    Full Text Available Data processing in interferometer systems requires high-resolution and high-speed algorithms. Recurrence algorithms based on a parametric representation of the signal process signal samples sequentially. In some cases, recurrence algorithms make it possible to increase the speed and quality of data processing compared with classical processing methods. The dependence of the measured interferometer signal on the parameters of its model, together with the stochastic nature of noise formation in the system, is in general nonlinear, so nonlinear stochastic filtering algorithms are appropriate for processing such signals. The extended Kalman filter, which linearizes the state and output equations using first-order derivatives with respect to the parameter vector, is one example. To decrease the approximation error of this method, second-order extended Kalman filtering is suggested, which additionally uses second-order derivatives of the model equations with respect to the parameter vector. Examples of the algorithm implemented with different sets of estimated parameters are described. The proposed algorithm makes it possible to increase the quality of data processing in interferometer systems whose signals follow the considered models. The standard deviation of the estimated amplitude envelope does not exceed 4% of its maximum. It is shown that the signal-to-noise ratio of the reconstructed signal is increased by 60%.
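
    For orientation only, below is a minimal first-order extended Kalman filter sketch in Python for a simplified interference-signal model z_k = A_k·cos(φ_k) + noise, with state [A, φ] and a nominal phase increment per sample; the increment and noise levels are assumed values, and the paper's second-order variant (which additionally uses second derivatives of the model equations) is not reproduced here.

    ```python
    import numpy as np

    def ekf_interference(z, dphi=0.2, q=1e-4, r=1e-2):
        """First-order EKF for z_k = A_k*cos(phi_k) + v_k (illustrative model only)."""
        x = np.array([1.0, 0.0])           # state: [amplitude A, phase phi]
        P = np.eye(2)                      # state covariance
        Q = q * np.eye(2)                  # process noise covariance (assumed)
        est = []
        for zk in z:
            # predict: amplitude modeled as a random walk, phase advances by dphi
            x = np.array([x[0], x[1] + dphi])
            P = P + Q                      # transition Jacobian is the identity here
            # update with the linearized measurement model h(x) = A*cos(phi)
            A, phi = x
            H = np.array([[np.cos(phi), -A * np.sin(phi)]])
            S = H @ P @ H.T + r
            K = (P @ H.T) / S              # Kalman gain (scalar innovation)
            x = x + (K * (zk - A * np.cos(phi))).ravel()
            P = (np.eye(2) - K @ H) @ P
            est.append(x.copy())
        return np.array(est)

    # usage: recover amplitude and phase from a noisy synthetic fringe signal
    true_phi = 0.2 * np.arange(500)
    z = 2.0 * np.cos(true_phi) + 0.1 * np.random.randn(500)
    print(ekf_interference(z)[-1])         # final [A, phi] estimate
    ```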

  4. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    Science.gov (United States)

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
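
    As background, the sketch below implements the classical simultaneous projection iteration for the split equality problem (find x ∈ C, y ∈ Q with Ax = By), using simple box sets and a fixed step size derived from the operator norms; the article's new step-size choice and its semi-alternating variants are not reproduced here, and the example data are arbitrary.

    ```python
    import numpy as np

    def simultaneous_projection(A, B, proj_C, proj_Q, x, y, iters=500):
        """Classical simultaneous projection method for Ax = By, x in C, y in Q."""
        # fixed step size in (0, 2 / (||A||^2 + ||B||^2)); the paper proposes a better choice
        gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
        for _ in range(iters):
            r = A @ x - B @ y                      # residual of the coupling constraint
            x = proj_C(x - gamma * A.T @ r)        # gradient step + projection onto C
            y = proj_Q(y + gamma * B.T @ r)        # gradient step + projection onto Q
        return x, y

    # usage with box constraints C = [0,1]^n, Q = [0,1]^m (assumed for the example)
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((5, 4)), rng.standard_normal((5, 3))
    box = lambda v: np.clip(v, 0.0, 1.0)
    x, y = simultaneous_projection(A, B, box, box, np.ones(4), np.ones(3))
    print(np.linalg.norm(A @ x - B @ y))           # should be small if a solution exists
    ```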

  5. Evolving temporal association rules with genetic algorithms

    OpenAIRE

    Matthews, Stephen G.; Gongora, Mario A.; Hopgood, Adrian A.

    2010-01-01

    A novel framework for mining temporal association rules by discovering itemsets with a genetic algorithm is introduced. Metaheuristics have been applied to association rule mining; we show the efficacy of extending this to another variant, temporal association rule mining. Our framework is an enhancement to existing temporal association rule mining methods as it employs a genetic algorithm to simultaneously search the rule space and temporal space. A methodology for validating the ability of...

  6. Changing Consumption Behavior of Net Generation and the Adoption of Streaming Music Services : Extending the Technology Acceptance Model to Account for Streaming Music Services

    OpenAIRE

    Delikan, Mehmet Deniz

    2010-01-01

    The rise of streaming music services and the decreasing importance of physical distribution is an inevitable change that the industry has been facing, resulting from the so-called internet revolution of the past few years. Over the years, the music business has already shifted to online platforms with the birth of file sharing. Today, a generation that grew up digital has come of age. Members of this generation have different consumption habits than before, and have different mot...

  7. A new algorithm for extended nonequilibrium molecular dynamics simulations of mixed flow

    NARCIS (Netherlands)

    Hunt, T.A.; Hunt, Thomas A.; Bernardi, Stefano; Todd, B.D.

    2010-01-01

    In this work, we develop a new algorithm for nonequilibrium molecular dynamics of fluids under planar mixed flow, a linear combination of planar elongational flow and planar Couette flow. To date, the only way of simulating mixed flow using nonequilibrium molecular dynamics techniques was to impose

  8. Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm

    Science.gov (United States)

    Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet

    2017-01-01

    Rank aggregation techniques have found wide application in metasearch along with other areas such as sports, voting systems, stock markets, and spam reduction. This paper presents the optimization of rank lists for web queries submitted by the user to different metasearch engines. A metaheuristic approach such as Genetic algorithm based rank…

  9. AN EXTENDED SPECTRAL–SPATIAL CLASSIFICATION APPROACH FOR HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    D. Akbari

    2017-11-01

    Full Text Available In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data set is tested. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.

  10. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    KAUST Repository

    Wang, Wei; Guyet, Thomas; Quiniou, René ; Cordier, Marie-Odile; Masseglia, Florent; Zhang, Xiangliang

    2014-01-01

    In this work, we propose a novel framework of autonomic intrusion detection that performs online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-managing: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamic clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected at our institute, as well as a set of benchmark KDD'99 data, are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to the adaptive Sequential Karhunen–Loeve method and static AP, as well as three other static anomaly detection methods, namely k-NN, PCA and SVM.
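
    As a loose illustration of the clustering step only, the sketch below applies scikit-learn's Affinity Propagation to one batch of traffic feature vectors and flags members of very small clusters as candidate anomalies; the feature construction, batch handling, threshold and synthetic data are assumptions for the example, not the framework's self-labeling and adaptation mechanisms.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def flag_anomalies(batch, min_cluster_size=5):
        """Cluster one batch of feature vectors; treat tiny clusters as anomalies."""
        ap = AffinityPropagation(damping=0.9, random_state=0).fit(batch)
        labels = ap.labels_
        if labels.min() < 0:                          # AP failed to converge on this batch
            return np.array([], dtype=int)
        sizes = np.bincount(labels)
        # members of clusters smaller than the threshold are flagged as anomalous
        return np.flatnonzero(sizes[labels] < min_cluster_size)

    # usage on synthetic "HTTP request" features (length, entropy, status class, ...)
    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(200, 4))
    odd = rng.normal(6.0, 0.3, size=(3, 4))           # a few outlying requests
    print(flag_anomalies(np.vstack([normal, odd])))   # indices of flagged rows
    ```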

  12. Sparse Variational Bayesian SAGE Algorithm With Application to the Estimation of Multipath Wireless Channels

    DEFF Research Database (Denmark)

    Shutin, Dmitriy; Fleury, Bernard Henri

    2011-01-01

    In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purpose. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing … parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expression …

  13. Adaptive sampling algorithm for detection of superpoints

    Institute of Scientific and Technical Information of China (English)

    CHENG Guang; GONG Jian; DING Wei; WU Hua; QIANG ShiQiang

    2008-01-01

    Superpoints are sources (or destinations) that connect to a great number of destinations (or sources) during a measurement time interval, so detecting superpoints in real time is very important for network security and management. Previous algorithms are not able to control memory usage and deliver the desired accuracy, so it is hard to detect superpoints on a high-speed link in real time. In this paper, we propose an adaptive sampling algorithm to detect superpoints in real time, which uses a flow sample-and-hold module to reduce the detection of non-superpoints and to improve the measurement accuracy for superpoints. We also design a data stream structure to maintain the flow records, which statistically compensates for flow hash collisions. An adaptive process based on different sampling probabilities is used to maintain the recorded IP addresses in the limited memory. This algorithm is compared with other algorithms by analyzing real network trace data. Experimental results and mathematical analysis show that this algorithm has the advantages of both a limited memory requirement and high measurement accuracy.
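
    The sketch below is a much-simplified, hypothetical sample-and-hold superpoint detector: each new source is admitted with a sampling probability, but once a source is held its later flows are always counted, and sources whose distinct-destination count crosses a threshold are reported. The probabilities, threshold and memory cap are illustrative, not the paper's adaptive scheme.

    ```python
    import random
    from collections import defaultdict

    def detect_superpoints(flows, p_sample=0.05, threshold=100, max_sources=10_000):
        """flows: iterable of (src, dst) pairs from the packet/flow stream."""
        held = defaultdict(set)                      # src -> distinct destinations seen
        for src, dst in flows:
            if src in held:                          # "hold": always update a sampled source
                held[src].add(dst)
            elif len(held) < max_sources and random.random() < p_sample:
                held[src] = {dst}                    # "sample": admit a new source
        return {s: len(d) for s, d in held.items() if len(d) >= threshold}

    # usage on a synthetic trace: one scanner among ordinary sources
    trace = [(f"10.0.0.{i % 50}", f"192.168.1.{i % 20}") for i in range(20_000)]
    trace += [("10.9.9.9", f"172.16.{i // 256}.{i % 256}") for i in range(5_000)]
    random.shuffle(trace)
    print(detect_superpoints(trace))                 # likely reports the scanner 10.9.9.9
    ```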

  14. An Improved Biclustering Algorithm and Its Application to Gene Expression Spectrum Analysis

    OpenAIRE

    Qu, Hua; Wang, Liu-Pu; Liang, Yan-Chun; Wu, Chun-Guo

    2016-01-01

    The Cheng and Church algorithm is an important approach among biclustering algorithms. In this paper, the extended-space process in the second stage of the Cheng and Church algorithm is improved, and the selection of two important parameters is discussed. The results of the improved algorithm applied to gene expression spectrum analysis show that, compared with the Cheng and Church algorithm, the quality of the clustering results is clearly enhanced, the mined expression models are better, and the d...

  15. StreamCat

    Data.gov (United States)

    U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...

  16. Stream Crossings

    Data.gov (United States)

    Vermont Center for Geographic Information — Physical measurements and attributes of stream crossing structures and adjacent stream reaches which are used to provide a relative rating of aquatic organism...

  17. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  18. Sorting method to extend the dynamic range of the Shack-Hartmann wave-front sensor

    International Nuclear Information System (INIS)

    Lee, Junwon; Shack, Roland V.; Descour, Michael R.

    2005-01-01

    We propose a simple and powerful algorithm to extend the dynamic range of a Shack-Hartmann wave-front sensor. In a conventional Shack-Hartmann wave-front sensor the dynamic range is limited by the f-number of a lenslet, because each focal spot is required to remain within the area confined by its single lenslet. The sorting method proposed here eliminates this limitation and extends the dynamic range by tagging each spot in a special sequence. Since the sorting method is a simple algorithm that does not change the measurement configuration, no extra hardware, multiple measurements, or complicated algorithms are required. We not only present the theory and a calculation example of the sorting method but also implement measurement of a highly aberrated wave front from non-rotationally symmetric optics
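
    Purely as an illustration of the sorting idea, the sketch below assigns detected focal spots to lenslets by ordering their centroids row by row, rather than requiring each spot to stay inside its own lenslet cell; the regular-grid assumptions (known lenslet counts, every spot detected, no spot crossings) are simplifications of the paper's method.

    ```python
    import numpy as np

    def sort_spots(centroids, n_rows, n_cols):
        """Assign spot centroids (x, y) to lenslet indices by row-major ordering.

        Assumes every lenslet produced exactly one detected spot and that spots do
        not cross over each other, so ordering is preserved even when a spot
        wanders far outside its own lenslet cell.
        """
        c = np.asarray(centroids, dtype=float)
        rows = np.argsort(c[:, 1])                   # order all spots by y
        grid = np.empty((n_rows, n_cols, 2))
        for r in range(n_rows):
            row = c[rows[r * n_cols:(r + 1) * n_cols]]
            grid[r] = row[np.argsort(row[:, 0])]     # order each row's spots by x
        return grid                                   # grid[r, c] = centroid of lenslet (r, c)

    # usage: a 4x4 lenslet array with large, smoothly varying spot displacements
    xx, yy = np.meshgrid(np.arange(4) * 10.0, np.arange(4) * 10.0)
    spots = np.column_stack([(xx + 3.0 * np.sin(yy / 10)).ravel(),
                             (yy + 2.0 * np.cos(xx / 10)).ravel()])
    print(sort_spots(spots, 4, 4)[0, 0])
    ```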

  19. Frequent Pattern Mining Algorithms for Data Clustering

    DEFF Research Database (Denmark)

    Zimek, Arthur; Assent, Ira; Vreeken, Jilles

    2014-01-01

    Discovering clusters in subspaces, or subspace clustering and related clustering paradigms, is a research field where we find many frequent pattern mining related influences. In fact, as the first algorithms for subspace clustering were based on frequent pattern mining algorithms, it is fair to say that frequent pattern mining was at the cradle of subspace clustering—yet, it quickly developed into an independent research field. In this chapter, we discuss how frequent pattern mining algorithms have been extended and generalized towards the discovery of local clusters in high-dimensional data. In particular, we discuss several example algorithms for subspace clustering or projected clustering, as well as point out recent research questions and open topics in this area relevant to researchers in either clustering or pattern mining.

  20. Quantum Algorithms for Compositional Natural Language Processing

    Directory of Open Access Journals (Sweden)

    William Zeng

    2016-08-01

    Full Text Available We propose a new application of quantum computing to the field of natural language processing. Ongoing work in this field attempts to incorporate grammatical structure into algorithms that compute meaning. In (Coecke, Sadrzadeh and Clark, 2010), the authors introduce such a model (the CSC model) based on tensor product composition. While this algorithm has many advantages, its implementation is hampered by the large classical computational resources that it requires. In this work we show how computational shortcomings of the CSC approach could be resolved using quantum computation (possibly in addition to existing techniques for dimension reduction). We address the value of quantum RAM (Giovannetti, 2008) for this model and extend an algorithm from Wiebe, Braun and Lloyd (2012) into a quantum algorithm to categorize sentences in CSC. Our new algorithm demonstrates a quadratic speedup over classical methods under certain conditions.

  1. DEFINITION AND ANALYSIS OF MOTION ACTIVITY AFTER-STROKE PATIENT FROM THE VIDEO STREAM

    Directory of Open Access Journals (Sweden)

    M. Yu. Katayev

    2014-01-01

    Full Text Available This article describes an approach to assessing the motion activity of a patient in the post-stroke period, allowing the doctor to obtain new information and give better-founded recommendations on rehabilitation treatment than traditional approaches. We describe a hardware-software complex for determining and analyzing the motion activity of a post-stroke patient from a video stream. The article presents the complex, its algorithmic components, and results obtained by processing actual data. The algorithms and technology significantly accelerate gait analysis and improve the quality of diagnostics for post-stroke patients.

  2. Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means

    Science.gov (United States)

    Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.

    2018-04-01

    This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that can be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared to existing clustering algorithms and to IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.

  3. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel subject to long-run variations of its parameters (bitrate, loss rate). Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and demonstrate how ROI coding combined with SNR scalability further improves visual quality.

  4. Advanced algorithms for information science

    International Nuclear Information System (INIS)

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-01-01

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression

  6. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
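
    To make the recommended workflow concrete, here is a small hypothetical sketch: subtract a mean seasonal cycle from each variable, then score each time step by its mean distance to its k nearest neighbours in the deseasonalized multivariate space; the period, k, quantile threshold and synthetic data are assumptions for the example rather than values from the study.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_anomaly_scores(X, period=365, k=10):
        """X: (time, variables) array sampled once per time step."""
        # feature extraction: remove the mean seasonal cycle of each variable
        t = np.arange(len(X)) % period
        seasonal = np.stack([X[t == p].mean(axis=0) for p in range(period)])
        residual = X - seasonal[t]
        # anomaly detection: mean distance to the k nearest neighbours
        nn = NearestNeighbors(n_neighbors=k + 1).fit(residual)
        dist, _ = nn.kneighbors(residual)
        return dist[:, 1:].mean(axis=1)              # drop self-distance in column 0

    # usage: two seasonal variables with an injected anomalous week
    rng = np.random.default_rng(0)
    days = np.arange(3 * 365)
    X = np.column_stack([np.sin(2 * np.pi * days / 365), np.cos(2 * np.pi * days / 365)])
    X += 0.05 * rng.standard_normal(X.shape)
    X[500:507] += 1.0                                 # abrupt shift
    scores = knn_anomaly_scores(X)
    print(np.flatnonzero(scores > np.quantile(scores, 0.99))[:10])
    ```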

  7. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results

    OpenAIRE

    Otero, Fernando E.B.; Freitas, Alex A.

    2016-01-01

    The vast majority of Ant Colony Optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-MinerPB algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-MinerPB algorith...

  8. Prioritized Contact Transport Stream

    Science.gov (United States)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.

  9. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored...

  10. StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.

    Science.gov (United States)

    Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei

    2017-10-18

    Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.

  11. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution

  12. Using transformation algorithms to estimate (co)variance ...

    African Journals Online (AJOL)

    REML) procedures by a diagonalization approach is extended to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of ...

  13. An application of the discrete-time Toda lattice to the progressive algorithm by Lanczos and related problems

    Science.gov (United States)

    Nakamura, Yoshimasa; Sekido, Hiroto

    2018-04-01

    The finite or the semi-infinite discrete-time Toda lattice has many applications to various areas in applied mathematics. The purpose of this paper is to review how the Toda lattice appears in the Lanczos algorithm through the quotient-difference algorithm and its progressive form (pqd). Then a multistep progressive algorithm (MPA) for solving linear systems is presented. The extended Lanczos parameters can be given not by computing inner products of the extended Lanczos vectors but by using the pqd algorithm with high relative accuracy at a lower cost. The asymptotic behavior of the pqd algorithm brings us some applications of MPA related to eigenvectors.

  14. Fast processing of microscopic images using object-based extended depth of field.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. This
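
    As a rough illustration of focus stacking restricted to foreground regions, the sketch below scores per-pixel sharpness with a local variance of the Laplacian, builds a crude foreground mask, and copies each foreground pixel from the sharpest slice while taking the background from a single reference slice; the mask construction and all thresholds are assumptions for the example, not the OEDoF modules themselves.

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def merge_focal_stack(stack, mask_quantile=0.80, win=9):
        """stack: (n_slices, H, W) grayscale focal stack -> composite (H, W)."""
        sharpness = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, win)
                              for s in stack])        # local focus measure per slice
        peak = sharpness.max(axis=0)
        mask = peak > np.quantile(peak, mask_quantile) # crude foreground mask
        best = sharpness.argmax(axis=0)                # sharpest slice per pixel
        composite = stack[0].astype(float).copy()      # background from reference slice
        rows, cols = np.nonzero(mask)
        composite[rows, cols] = stack[best[rows, cols], rows, cols]
        return composite

    # usage with a synthetic 3-slice stack: textured patch is "in focus" in slice 1
    rng = np.random.default_rng(0)
    stack = rng.normal(0.5, 0.01, size=(3, 64, 64))
    stack[1, 20:30, 20:30] = rng.random((10, 10))
    print(merge_focal_stack(stack).shape)
    ```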

  15. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Science.gov (United States)

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm so that it provides even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream that is stored or transmitted. It is applied to the compression of multichannel ECG data. A specific procedure based on the modified algorithm is also presented for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding the compression of multichannel ECG data. Furthermore, in order to compress one signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  16. Algorithms and Data Structures for Strings, Points and Integers

    DEFF Research Database (Denmark)

    Vind, Søren Juhl

    a string under a compression scheme that can achieve better than entropy compression. We also give improved results for the substring concatenation problem, and an extension of our structure can be used as a black box to get an improved solution to the previously studied dynamic text static pattern problem. Compressed Pattern Matching: In the streaming model, input data flows past a client one item at a time, but is far too large for the client to store. The annotated streaming model extends the model by introducing a powerful but untrusted annotator (representing "the cloud") that can annotate input elements with additional information, sent as one-way communication to the client. We generalize the annotated streaming model to be able to solve problems on strings and present a data structure that allows us to trade off client space and annotation size. This lets us exploit the power of the annotator. In compressed...

  17. Modified SURF Algorithm Implementation on FPGA For Real-Time Object Tracking

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes an FPGA-based implementation of the modified speeded-up robust features (SURF) algorithm. An FPGA was selected for a parallel implementation using VHDL to ensure feature extraction in real time. A sliding 84×84 window is used to store integral pixels and accelerate the Hessian determinant calculation, orientation assignment and descriptor estimation. Local extremum search is used to find points of interest in 8 scales. The simplified descriptor and orientation vector are calculated in parallel in 6 scales. The algorithm was investigated by tracking a marker and drawing a plane or cube. All parts of the algorithm run on a 25 MHz clock. The video stream was generated by a 60 fps, 640×480 pixel camera. Article in Lithuanian
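
    For background on why integral images speed up the Hessian computation, the sketch below builds an integral image and evaluates box approximations of the second derivatives (Dxx, Dyy, Dxy) at one pixel in O(1) per box; the 9×9 box layout and the 0.9 cross-term weight approximately follow the published SURF filters, while area normalization, multi-scale handling, the descriptor, and the synthetic blob image are simplifications assumed for the example.

    ```python
    import numpy as np

    def integral_image(img):
        """ii[y, x] = sum of img[:y, :x]; extra zero row/column for easy box sums."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        """Sum of img[y0:y1, x0:x1] in O(1) using the integral image."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    def hessian_response_9x9(ii, y, x):
        """Box-filter approximation of det(Hessian) at (y, x) with 9x9 filters."""
        # Dyy: three 3-tall, 5-wide lobes stacked vertically (weights 1, -2, 1)
        dyy = (box_sum(ii, y - 4, x - 2, y - 1, x + 3)
               - 2 * box_sum(ii, y - 1, x - 2, y + 2, x + 3)
               + box_sum(ii, y + 2, x - 2, y + 5, x + 3))
        # Dxx: transposed layout of Dyy
        dxx = (box_sum(ii, y - 2, x - 4, y + 3, x - 1)
               - 2 * box_sum(ii, y - 2, x - 1, y + 3, x + 2)
               + box_sum(ii, y - 2, x + 2, y + 3, x + 5))
        # Dxy: four 3x3 corner boxes with alternating signs
        dxy = (box_sum(ii, y - 3, x + 1, y, x + 4) - box_sum(ii, y - 3, x - 3, y, x)
               - box_sum(ii, y + 1, x + 1, y + 4, x + 4) + box_sum(ii, y + 1, x - 3, y + 4, x))
        return dxx * dyy - (0.9 * dxy) ** 2

    # usage: response at the centre of a small bright blob
    img = np.zeros((32, 32))
    img[12:20, 12:20] = 1.0
    ii = integral_image(img)
    print(hessian_response_9x9(ii, 16, 16))
    ```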

  18. Extended Sleeve Products Allow Control and Monitoring of Process Fluid Flows Inside Shielding, Behind Walls and Beneath Floors - 13041

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, Mark W. [Flowserve Corporation, 1978 Foreman Drive Cookeville, TN 38506 (United States)

    2013-07-01

    Throughout power generation, delivery and waste remediation, the ability to control process streams in difficult or impossible locations becomes increasingly necessary as the complexity of processes increases. Example applications include radioactive environments, inside concrete installations, buried in dirt, or inside a shielded or insulated pipe. In these situations, it is necessary to implement innovative solutions to tackle such issues as valve maintenance, valve control from remote locations, equipment cleaning in hazardous environments, and flow stream analysis. The Extended Sleeve family of products provides a scalable solution to tackle some of the most challenging applications in hazardous environments which require flow stream control and monitoring. The Extended Sleeve family of products is defined in three groups: Extended Sleeve (ESV), Extended Bonnet (EBV) and Instrument Enclosure (IE). Each of the products provides a variation on the same requirements: to provide access to the internals of a valve, or to monitor the fluid passing through the pipeline through shielding around the process pipe. The shielding can be as simple as a grout filled pipe covering a process pipe or as complex as a concrete deck protecting a room in which the valves and pipes pass through at varying elevations. Extended Sleeves are available between roughly 30 inches and 18 feet of distance between the pipeline centerline and the top of the surface to which it mounts. The Extended Sleeve provides features such as ± 1.5 inches of adjustment between the pipeline and deck location, internal flush capabilities, automatic alignment of the internal components during assembly and integrated actuator mounting pads. The Extended Bonnet is a shorter fixed height version of the Extended Sleeve which has a removable deck flange to facilitate installation through walls, and is delivered fully assembled. The Instrument Enclosure utilizes many of the same components as an Extended Sleeve

  19. Extended-Search, Bézier Curve-Based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

    Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR) system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended-search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas including continuous search, discontinuous search and bending search in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes' outer boundaries with large curvature variation. The lane detection and reconstruction algorithm — including initial-block determination, extended search, binarization processing and lane boundary fitting in different scenarios — is verified in road tests. The results show that this algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it achieves an 88.6% detection rate on curved lanes with large or variable curvatures, where the accident rate is higher than that of straight lanes.
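
    To illustrate why Bézier curves are convenient for lane reconstruction, the sketch below fits a single cubic Bézier segment to detected boundary points by least squares (endpoints fixed, inner control points solved for) and then samples the curve; the chord-length parameterization and the synthetic points are assumptions for the example, not the paper's boundary-block pipeline.

    ```python
    import numpy as np

    def bernstein3(t):
        """Cubic Bernstein basis evaluated at parameters t, shape (len(t), 4)."""
        t = np.asarray(t)[:, None]
        return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])

    def fit_cubic_bezier(points):
        """Least-squares cubic Bezier through noisy boundary points (endpoints fixed)."""
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
        t = d / d[-1]                                  # chord-length parameterization
        B = bernstein3(t)
        p0, p3 = points[0], points[-1]
        rhs = points - np.outer(B[:, 0], p0) - np.outer(B[:, 3], p3)
        inner, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)   # solve for P1, P2
        return np.vstack([p0, inner, p3])

    def eval_bezier(ctrl, n=50):
        return bernstein3(np.linspace(0.0, 1.0, n)) @ ctrl

    # usage on synthetic curved-lane boundary points (image coordinates)
    y = np.linspace(0, 200, 40)
    pts = np.column_stack([0.002 * y ** 2 + np.random.normal(0, 0.5, y.size), y])
    ctrl = fit_cubic_bezier(pts)
    print(ctrl.round(1))                               # the four control points
    print(eval_bezier(ctrl, 5).round(1))               # samples along the fitted lane
    ```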

  20. Quantifying Forested Riparian Buffer Ability to Ameliorate Stream Temperature in a Missouri Ozark Border Stream of the Central U.S

    Science.gov (United States)

    Bulliner, E. A.; Hubbart, J. A.

    2009-12-01

    Riparian buffers play an important role in modulating stream water quality, including temperature. There is a need to better understand riparian form and function to validate and improve contemporary management practices. Further studies are warranted to characterize energy attenuation by forested riparian canopy layers that normally buffer stream temperature, particularly in the central hardwood forest regions of the United States, where relationships between canopy density and stream temperature are unknown. To quantify these complex processes, two intensively instrumented hydroclimate stations were installed along two reaches of a riparian stream in central Missouri, USA in the winter of 2008. The hydroclimate stations are located along stream reaches oriented in both cardinal directions, which will allow interpolation of results to other orientations. Each station consists of an array of instrumentation that senses the flux of water and energy into and out of the riparian zone. Reference data are supplied from a nearby flux tower (US DOE) located on top of a forested ridge. The study sites are located within a University of Missouri preserved wildland area on the border of southern Missouri's Ozark region, an ecologically distinct region in the central United States. Limestone underlies the study area, resulting in a distinct semi-karst hydrologic system. Vegetation forms a complex, multi-layered canopy extending from the stream edge through the riparian zone and into surrounding hills. The climate is classified as humid continental, with approximate average annual temperature and precipitation of 13.2°C and 970 mm, respectively. Preliminary results (summer 2009 data) indicate incoming short-wave radiation is 24.9% higher at the N-S oriented stream reach relative to the E-W oriented reach. Maximum incoming short-wave radiation during the period was 64.5% lower at the N-S reach relative to the E-W reach. Average air temperature for the E-W reach was 0.3°C lower

  1. Faucet: streaming de novo assembly graph construction.

    Science.gov (United States)

    Rozov, Roye; Goldshlager, Gil; Halperin, Eran; Shamir, Ron

    2018-01-01

    We present Faucet, a two-pass streaming algorithm for assembly graph construction. Faucet builds an assembly graph incrementally as each read is processed. Thus, reads need not be stored locally, as they can be processed while downloading data and then discarded. We demonstrate this functionality by performing streaming graph assembly of publicly available data, and observe that the ratio of disk use to raw data size decreases as coverage is increased. Faucet pairs the de Bruijn graph obtained from the reads with additional metadata derived from them. We show that these metadata (coverage counts collected at junction k-mers and connections bridging between junction pairs) contain most of the salient information needed for assembly, and demonstrate that they enable cleaning of metagenome assembly graphs, greatly improving contiguity while maintaining accuracy. We compared Faucet's resource use and assembly quality to state-of-the-art metagenome assemblers, as well as leading resource-efficient genome assemblers. Faucet used orders of magnitude less time and disk space than the specialized metagenome assemblers MetaSPAdes and Megahit, while also improving on their memory use; this broadly matched the performance of other assemblers optimizing resource efficiency, namely Minia and LightAssembler. However, on the metagenomes tested, Faucet's outputs had 14-110% higher mean NGA50 lengths compared with Minia, and 2- to 11-fold higher mean NGA50 lengths compared with LightAssembler, the only other streaming assembler available. Faucet is available at https://github.com/Shamir-Lab/Faucet. rshamir@tau.ac.il or eranhalperin@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
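
    As a toy model of the streaming idea (not Faucet's actual two-pass design), the sketch below scans reads one at a time, tracks the distinct successor bases of each k-mer, and keeps coverage counts only at junction k-mers, i.e. those with more than one outgoing extension; the value of k and the reads are illustrative.

    ```python
    from collections import defaultdict

    def stream_junctions(reads, k=5):
        """One pass over reads: successor sets per k-mer, coverage kept at junctions only."""
        successors = defaultdict(set)                 # k-mer -> distinct next bases seen
        coverage = defaultdict(int)                   # junction k-mer -> times traversed
        for read in reads:
            for i in range(len(read) - k):
                kmer, nxt = read[i:i + k], read[i + k]
                successors[kmer].add(nxt)
                if len(successors[kmer]) > 1:         # k-mer has become a junction
                    coverage[kmer] += 1
        return {km: (sorted(s), coverage[km]) for km, s in successors.items() if len(s) > 1}

    # usage: two "haplotypes" sharing a prefix create a junction at the branch point
    reads = ["ACGTACGTTA", "ACGTACGCTA", "ACGTACGTTA"]
    print(stream_junctions(reads))
    ```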

  2. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  3. On the Cooley-Tukey Fast Fourier algorithm for arbitrary factors ...

    African Journals Online (AJOL)

    Atonuje and Okonta in [1] developed the Cooley-Tukey Fast Fourier transform algorithm and its application to the Fourier transform of discretely sampled data points N, expressed in terms of a power y of 2. In this paper, we extend the formalism of the Cooley-Tukey Fast Fourier transform algorithm of [1]. The method is developed ...

  4. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    Science.gov (United States)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10⁻⁴ mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  5. Conjugate gradient algorithms using multiple recursions

    Energy Technology Data Exchange (ETDEWEB)

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
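
    For reference, here is the classical conjugate gradient iteration whose single short recursion for the direction vectors is the starting point of the discussion; the multiple-recursion variants for unitary and shifted unitary matrices are not shown, and the example system is an assumed symmetric positive definite matrix.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Classical CG for SPD A: one short recursion updates the direction vector p."""
        x = np.zeros_like(b)
        r = b - A @ x                       # residual
        p = r.copy()                        # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p       # the single short recursion for directions
            rs = rs_new
        return x

    # usage on a random SPD system
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
    ```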

  6. libgapmis: extending short-read alignments.

    Science.gov (United States)

    Alachiotis, Nikolaos; Berger, Simon; Flouri, Tomáš; Pissis, Solon P; Stamatakis, Alexandros

    2013-01-01

    A wide variety of short-read alignment programmes have been published recently to tackle the problem of mapping millions of short reads to a reference genome, focusing on different aspects of the procedure such as time and memory efficiency, sensitivity, and accuracy. These tools allow for a small number of mismatches in the alignment; however, their ability to allow for gaps varies greatly, with many performing poorly or not allowing them at all. The seed-and-extend strategy is applied in most short-read alignment programmes. After aligning a substring of the reference sequence against the high-quality prefix of a short read--the seed--an important problem is to find the best possible alignment between a substring of the reference sequence succeeding and the remaining suffix of low quality of the read--extend. The fact that the reads are rather short and that the gap occurrence frequency observed in various studies is rather low suggest that aligning (parts of) those reads with a single gap is in fact desirable. In this article, we present libgapmis, a library for extending pairwise short-read alignments. Apart from the standard CPU version, it includes ultrafast SSE- and GPU-based implementations. libgapmis is based on an algorithm computing a modified version of the traditional dynamic-programming matrix for sequence alignment. Extensive experimental results demonstrate that the functions of the CPU version provided in this library accelerate the computations by a factor of 20 compared to other programmes. The analogous SSE- and GPU-based implementations accelerate the computations by a factor of 6 and 11, respectively, compared to the CPU version. The library also provides the user the flexibility to split the read into fragments, based on the observed gap occurrence frequency and the length of the read, thereby allowing for a variable, but bounded, number of gaps in the alignment. We present libgapmis, a library for extending pairwise short-read alignments. We

  7. Supervised classification of distributed data streams for smart grids

    Energy Technology Data Exchange (ETDEWEB)

    Guarracino, Mario R. [High Performance Computing and Networking - National Research Council of Italy, Naples (Italy); Irpino, Antonio; Verde, Rosanna [Seconda Universita degli Studi di Napoli, Dipartimento di Studi Europei e Mediterranei, Caserta (Italy); Radziukyniene, Neringa [Lithuanian Energy Institute, Laboratory of Systems Control and Automation, Kaunas (Lithuania)

    2012-03-15

    The electricity system inherited from the 19th and 20th centuries has been a reliable but centralized system. With the spreading of local, distributed and intermittent renewable energy resources, top-down central control of the grid no longer meets modern requirements. For these reasons, the power grid has been equipped with smart meters integrating bi-directional communications, advanced power measurement and management capabilities. Smart meters make it possible to remotely turn power on or off to a customer, read usage information, detect a service outage and the unauthorized use of electricity. To fully exploit their capabilities, we foresee the usage of distributed supervised classification algorithms. By gathering data available from meters and other sensors, such algorithms can create local classification models for attack detection, online monitoring, privacy preservation, workload balancing, prediction of energy demand and incoming faults. In this paper we present a decentralized distributed classification algorithm based on proximal support vector machines. The method uses partial knowledge, in form of data streams, to build its local model on each meter. We demonstrate the performance of the proposed scheme on synthetic datasets. (orig.)

  8. Benthic invertebrate fauna, small streams

    Science.gov (United States)

    J. Bruce Wallace; S.L. Eggert

    2009-01-01

    Small streams (first- through third-order streams) make up >98% of the total number of stream segments and >86% of stream length in many drainage networks. Small streams occur over a wide array of climates, geology, and biomes, which influence temperature, hydrologic regimes, water chemistry, light, substrate, stream permanence, a basin's terrestrial plant...

  9. Gene selection heuristic algorithm for nutrigenomics studies.

    Science.gov (United States)

    Valour, D; Hue, I; Grimard, B; Valour, B

    2013-07-15

    Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm here described extends the classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast-computing in "R." Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix converges to the probabilities of interconnection between genes, providing a selection of disjointed subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process reaches the aim of biologists who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.

  10. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Hu, Jifeng; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches, Giessen University (Germany); Ye, Hua [II. Physikalisches, Giessen University (Germany); Institute of High Energy Physics, Beijing (China); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA detector is a general-purpose detector for physics with high luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed in VHDL (Very High Speed Integrated Circuit Hardware Description Language) on FPGAs (Field Programmable Gate Arrays) as the first level and on a farm of GPUs or PCs as the second level. One important part of the system is the online track reconstruction. In this presentation, an online track-finding algorithm for helix track reconstruction in the solenoidal field is shown. A performance study using C++ and the status of the VHDL implementation are presented.

  11. Inventory of miscellaneous streams

    International Nuclear Information System (INIS)

    Lueck, K.J.

    1995-09-01

    On December 23, 1991, the US Department of Energy, Richland Operations Office (RL) and the Washington State Department of Ecology (Ecology) agreed to adhere to the provisions of the Department of Ecology Consent Order. The Consent Order lists the regulatory milestones for liquid effluent streams at the Hanford Site to comply with the permitting requirements of Washington Administrative Code. The RL provided the US Congress a Plan and Schedule to discontinue disposal of contaminated liquid effluent into the soil column on the Hanford Site. The plan and schedule document contained a strategy for the implementation of alternative treatment and disposal systems. This strategy included prioritizing the streams into two phases. The Phase 1 streams were considered to be higher priority than the Phase 2 streams. The actions recommended for the Phase 1 and 2 streams in the two reports were incorporated in the Hanford Federal Facility Agreement and Consent Order. Miscellaneous Streams are those liquid effluent streams identified within the Consent Order that are discharged to the ground but are not categorized as Phase 1 or Phase 2 Streams. This document consists of an inventory of the liquid effluent streams being discharged into the Hanford soil column.

  12. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    Science.gov (United States)

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  13. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away much of this information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. The processing is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  14. An implementation of super-encryption using RC4A and MDTM cipher algorithms for securing PDF Files on android

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.

    2018-03-01

    MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM Cipher algorithm is easy to implement but it is less secure than modern symmetric algorithms. In order to make it more secure, the RC4A stream cipher is added, and the cryptosystem thus becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM Cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and that the running time is directly proportional to the length of the plaintext and of the keys entered.
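
    To make the layering concrete, the following hedged Python sketch shows only the order of operations described above (classical cipher first, then a keystream cipher over the result). The simple shift step standing in for the MDTM Cipher and the use of plain RC4 in place of RC4A (which actually uses two S-boxes) are illustrative assumptions, not the paper's implementation.

    ```python
    # Hedged sketch of super encryption: a toy substitution step stands in for the
    # MDTM Cipher, and plain RC4 stands in for RC4A. Both stand-ins are assumptions;
    # the point is the layering: classical cipher first, then the stream cipher.
    def rc4_keystream(key, n):
        S = list(range(256))
        j = 0
        for i in range(256):                      # key-scheduling algorithm
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(n):                        # pseudo-random generation
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return out

    def toy_classical_encrypt(data, shift):       # placeholder for the MDTM step
        return bytes((b + shift) % 256 for b in data)

    def super_encrypt(plaintext, classical_key, stream_key):
        stage1 = toy_classical_encrypt(plaintext, classical_key)
        keystream = rc4_keystream(stream_key, len(stage1))
        return bytes(b ^ k for b, k in zip(stage1, keystream))

    pdf_bytes = b"%PDF-1.4 example payload"
    print(super_encrypt(pdf_bytes, classical_key=7, stream_key=b"secret").hex())
    ```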

  15. Effects of landscape features on population genetic variation of a tropical stream fish, Stone lapping minnow, Garra cambodgiensis, in the upper Nan River drainage basin, northern Thailand

    Directory of Open Access Journals (Sweden)

    Chaowalee Jaisuk

    2018-03-01

    Full Text Available Spatial genetic variation of river-dwelling freshwater fishes is typically affected by the historical and contemporary river landscape as well as life-history traits. Tropical river and stream landscapes have endured extended geological change, shaping the existing pattern of genetic diversity, but were not directly affected by glaciation. Thus, spatial genetic variation of tropical fish populations should look very different from the pattern observed in temperate fish populations. These data are becoming important for designing appropriate management and conservation plans, as these aquatic systems are undergoing intense development and exploitation. This study evaluated the effects of landscape features on population genetic diversity of Garra cambodgiensis, a stream cyprinid, in eight tributary streams in the upper Nan River drainage basin (n = 30–100 individuals/location), Nan Province, Thailand. These populations are under intense fishing pressure from local communities. Based on 11 microsatellite loci, we detected moderate genetic diversity within eight population samples (average number of alleles per locus = 10.99 ± 3.00; allelic richness = 10.12 ± 2.44). Allelic richness within samples and stream order of the sampling location were negatively correlated (P < 0.05). We did not detect recent bottleneck events in these populations, but we did detect genetic divergence among populations (Global FST = 0.022, P < 0.01). The Bayesian clustering algorithms (TESS and STRUCTURE) suggested that four to five genetic clusters roughly coincide with sub-basins: (1) headwater streams/main stem of the Nan River, (2) a middle tributary, (3) a southeastern tributary and (4) a southwestern tributary. We observed positive correlation between geographic distance and linearized FST (P < 0.05), and the genetic differentiation pattern can be moderately explained by the contemporary stream network (STREAMTREE analysis, R2 = 0.75). The MEMGENE analysis

  16. The Reach-and-Evolve Algorithm for Reachability Analysis of Nonlinear Dynamical Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); A. Goldsztejn

    2008-01-01

    This paper introduces a new algorithm dedicated to the rigorous reachability analysis of nonlinear dynamical systems. The algorithm is initially presented in the context of discrete time dynamical systems, and then extended to continuous time dynamical systems driven by ODEs. In

  17. Time-Based Data Streams: Fundamental Concepts for a Data Resource for Streams

    Energy Technology Data Exchange (ETDEWEB)

    Beth A. Plale

    2009-10-10

    Real time data, which we call data streams, are readings from instruments, environmental, bodily or building sensors that are generated at regular intervals and often, due to their volume, need to be processed in real time. Often a single pass is all that can be made on the data, and a decision to discard or keep the instance is made on the spot. Moreover, the stream is for all practical purposes indefinite, so decisions must be made on incomplete knowledge. This notion of data streams has a different set of issues from a file, for instance, that is byte streamed to a reader. The file is finite, so the byte stream becomes a processing convenience more than a fundamentally different kind of data. Through the duration of the project we examined three aspects of streaming data: first, techniques to handle streaming data in a distributed system organized as a collection of web services; second, the notion of the dashboard and real time controllable analysis constructs in the context of the Fermi Tevatron Beam Position Monitor; and third, provenance collection for stream processing such as might occur as raw observational data flows from the source and undergoes correction, cleaning, and quality control. The impact of this work is severalfold. We were one of the first to advocate that streams had little value unless aggregated, and that notion is now gaining general acceptance. We were also one of the first groups to grapple with the notion of provenance of stream data.

  18. Asteroid/meteorite streams

    Science.gov (United States)

    Drummond, J.

    The independent discovery of the same three streams (named alpha, beta, and gamma) among 139 Earth approaching asteroids and among 89 meteorite producing fireballs presents the possibility of matching specific meteorites to specific asteroids, or at least to asteroids in the same stream and, therefore, presumably of the same composition. Although perhaps of limited practical value, the three meteorites with known orbits are all ordinary chondrites. To identify, in general, the taxonomic type of the parent asteroid, however, would be of great scientific interest since these most abundant meteorite types cannot be unambiguously spectrally matched to an asteroid type. The H5 Pribram meteorite and asteroid 4486 (unclassified) are not part of a stream, but travel in fairly similar orbits. The LL5 Innisfree meteorite is orbitally similar to asteroid 1989DA (unclassified), and both are members of a fourth stream (delta) defined by five meteorite-dropping fireballs and this one asteroid. The H5 Lost City meteorite is orbitally similar to 1980AA (S type), which is a member of stream gamma defined by four asteroids and four fireballs. Another asteroid in this stream is classified as an S type, another is QU, and the fourth is unclassified. This stream suggests that ordinary chondrites should be associated with S (and/or Q) asteroids. Two of the known four V type asteroids belong to another stream, beta, defined by five asteroids and four meteorite-dropping (but unrecovered) fireballs, making it the most probable source of the eucrites. The final stream, alpha, defined by five asteroids and three fireballs is of unknown composition since no meteorites have been recovered and only one asteroid has an ambiguous classification of QRS. If this stream, or any other as yet undiscovered ones, were found to be composed of a more practical material (e.g., water or metal-rich), then recovery of the associated meteorites would provide an opportunity for in-hand analysis of a potential

  19. Optimizing Extender Code for NCSX Analyses

    International Nuclear Information System (INIS)

    Richman, M.; Ethier, S.; Pomphrey, N.

    2008-01-01

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch

  20. Destabilization of the Northeast Greenland Ice Stream

    DEFF Research Database (Denmark)

    Korsgaard, N. J.; Khan, Shfaqat Abbas; Kjaer, K. H.

    . Here, we reveal that the Northeast Greenland Ice Stream (NEGIS), which extends more than 600 km into the interior of the ice sheet, is now undergoing dynamic thinning after more than a quarter of a century of stability. This sector of the GrIS is of particular interest in sea level projections, because...... the glacier flows into a large submarine basin with a negative bed slope near the grounding line. Our findings unfold the next step in mass loss of the GrIS as we show a heightened risk of rapid sustained loss from Northeast Greenland on top of the thinning in Southeast and Northwestern Greenland....

  1. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    Full Text Available In order to discover the structure of a local community more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from a single node, yet the agglomeration ability of a single node is necessarily less than that of multiple nodes. The community expansion in this paper therefore no longer starts from the initial node alone but from a node cluster that contains the initial node and whose members are relatively densely connected with each other. The algorithm mainly includes two phases: it first detects the minimal cluster and then finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.
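
    The two-phase idea (build a small, densely connected seed cluster, then grow it greedily) can be illustrated with the generic sketch below; the seed-cluster rule, the internal/boundary-edge quality score, and the use of networkx are simplifying assumptions rather than the paper's exact procedure.

    ```python
    # Illustrative local community detection: grow a minimal seed cluster greedily.
    # The quality score (internal edges over internal plus boundary edges) is a
    # generic choice standing in for the paper's own criterion.
    import networkx as nx

    def minimal_cluster(G, seed):
        """Seed plus the neighbor sharing the most common neighbors with it."""
        best, best_score = None, -1
        for v in G.neighbors(seed):
            score = len(set(G.neighbors(seed)) & set(G.neighbors(v)))
            if score > best_score:
                best, best_score = v, score
        return {seed, best} if best is not None else {seed}

    def local_community(G, seed):
        def quality(nodes):
            internal = G.subgraph(nodes).number_of_edges()
            boundary = sum(1 for u in nodes for v in G.neighbors(u) if v not in nodes)
            return internal / (internal + boundary) if internal + boundary else 0.0

        C = minimal_cluster(G, seed)
        improved = True
        while improved:
            improved = False
            frontier = {v for u in C for v in G.neighbors(u)} - C
            for v in sorted(frontier):
                if quality(C | {v}) > quality(C):
                    C.add(v)
                    improved = True
        return C

    print(sorted(local_community(nx.karate_club_graph(), seed=0)))
    ```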

  2. Aeroacoustics of Three-Stream Jets

    Science.gov (United States)

    Henderson, Brenda S.

    2012-01-01

    Results from acoustic measurements of noise radiated from a heated, three-stream, co-annular exhaust system operated at subsonic conditions are presented. The experiments were conducted for a range of core, bypass, and tertiary stream temperatures and pressures. The nozzle system had a fan-to-core area ratio of 2.92 and a tertiary-to-core area ratio of 0.96. The impact of introducing a third stream on the radiated noise for third-stream velocities below that of the bypass stream was to reduce high frequency noise levels at broadside and peak jet-noise angles. Mid-frequency noise radiation at aft observation angles was impacted by the conditions of the third stream. The core velocity had the greatest impact on peak noise levels and the bypass-to-core mass flow ratio had a slight impact on levels in the peak jet-noise direction. The third-stream jet conditions had no impact on peak noise levels. Introduction of a third jet stream in the presence of a simulated forward-flight stream limits the impact of the third stream on radiated noise. For equivalent ideal thrust conditions, two-stream and three-stream jets can produce similar acoustic spectra although high-frequency noise levels tend to be lower for the three-stream jet.

  3. A method to assess longitudinal riverine connectivity in tropical streams dominated by migratory biota

    Science.gov (United States)

    Crook, K.E.; Pringle, C.M.; Freeman, Mary C.

    2009-01-01

    1. One way in which dams affect ecosystem function is by altering the distribution and abundance of aquatic species. 2. Previous studies indicate that migratory shrimps have significant effects on ecosystem processes in Puerto Rican streams, but are vulnerable to impediments to upstream or downstream passage, such as dams and associated water intakes where stream water is withdrawn for human water supplies. Ecological effects of dams and water withdrawals from streams depend on spatial context and temporal variability of flow in relation to the amount of water withdrawn. 3. This paper presents a conceptual model for estimating the probability that an individual shrimp is able to migrate from a stream's headwaters to the estuary as a larva, and then return to the headwaters as a juvenile, given a set of dams and water withdrawals in the stream network. The model is applied to flow and withdrawal data for a set of dams and water withdrawals in the Caribbean National Forest (CNF) in Puerto Rico. 4. The index of longitudinal riverine connectivity (ILRC), is used to classify 17 water intakes in streams draining the CNF as having low, moderate, or high connectivity in terms of shrimp migration in both directions. An in-depth comparison of two streams showed that the stream characterized by higher water withdrawal had low connectivity, even during wet periods. Severity of effects is illustrated by a drought year, where the most downstream intake caused 100% larval shrimp mortality 78% of the year. 5. The ranking system provided by the index can be used as a tool for conservation ecologists and water resource managers to evaluate the relative vulnerability of migratory biota in streams, across different scales (reach-network), to seasonally low flows and extended drought. This information can be used to help evaluate the environmental tradeoffs of future water withdrawals. ?? 2008 John Wiley & Sons, Ltd.
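
    As a rough illustration of the kind of index described in point 3 (not the authors' exact formulation), a round-trip connectivity value can be approximated as the product of per-barrier passage probabilities for the downstream larval drift and the upstream juvenile return; the barrier names and probabilities below are made-up placeholders.

    ```python
    # Hypothetical round-trip connectivity: the product of per-barrier passage
    # probabilities in each direction. All values are illustrative placeholders.
    def round_trip_connectivity(barriers):
        p = 1.0
        for b in barriers:
            p *= b["p_downstream"] * b["p_upstream"]
        return p

    intakes = [
        {"name": "upper intake", "p_downstream": 0.9, "p_upstream": 0.8},
        {"name": "lower intake", "p_downstream": 0.6, "p_upstream": 0.7},
    ]
    print(f"round-trip connectivity: {round_trip_connectivity(intakes):.2f}")
    ```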

  4. Academic Self-Concepts in Ability Streams: Considering Domain Specificity and Same-Stream Peers

    Science.gov (United States)

    Liem, Gregory Arief D.; McInerney, Dennis M.; Yeung, Alexander S.

    2015-01-01

    The study examined the relations between academic achievement and self-concepts in a sample of 1,067 seventh-grade students from 3 core ability streams in Singapore secondary education. Although between-stream differences in achievement were large, between-stream differences in academic self-concepts were negligible. Within each stream, levels of…

  5. Solar wind stream interfaces

    International Nuclear Information System (INIS)

    Gosling, J.T.; Asbridge, J.R.; Bame, S.J.; Feldman, W.C.

    1978-01-01

    Measurements aboard Imp 6, 7, and 8 reveal that approximately one third of all high-speed solar wind streams observed at 1 AU contain a sharp boundary (of thickness less than approx. 4 x 10⁴ km) near their leading edge, called a stream interface, which separates plasma of distinctly different properties and origins. Identified as discontinuities across which the density drops abruptly, the proton temperature increases abruptly, and the speed rises, stream interfaces are remarkably similar in character from one stream to the next. A superposed epoch analysis of plasma data has been performed for 23 discontinuous stream interfaces observed during the interval March 1971 through August 1974. Among the results of this analysis are the following: (1) a stream interface separates what was originally thick (i.e., dense) slow gas from what was originally thin (i.e., rare) fast gas; (2) the interface is the site of a discontinuous shear in the solar wind flow in a frame of reference corotating with the sun; (3) stream interfaces occur at speeds less than 450 km s⁻¹ and close to or at the maximum of the pressure ridge at the leading edges of high-speed streams; (4) a discontinuous rise by approx. 40% in electron temperature occurs at the interface; and (5) discontinuous changes (usually rises) in alpha particle abundance and flow speed relative to the protons occur at the interface. Stream interfaces do not generally recur on successive solar rotations, even though the streams in which they are embedded often do. At distances beyond several astronomical units, stream interfaces should be bounded by forward-reverse shock pairs; three of four reverse shocks observed at 1 AU during 1971--1974 were preceded within approx. 1 day by stream interfaces. Our observations suggest that many streams close to the sun are bounded on all sides by large radial velocity shears separating rapidly expanding plasma from more slowly expanding plasma

  6. Geochemical orientation survey of stream sediment, stream water, and ground water near uranium prospects, Monticello area, New York. National Uranium Resource Evaluation Program

    International Nuclear Information System (INIS)

    Rose, A.W.; Smith, A.T.; Wesolowski, D.

    1982-08-01

    A detailed geochemical test survey has been conducted in a 570 sq km area around six small copper-uranium prospects in sandstones of the Devonian Catskill Formation near Monticello in southern New York state. This report summarizes and interprets the data for about 500 stream sediment samples, 500 stream water samples, and 500 ground water samples, each analyzed for 40 to 50 elements. The groundwater samples furnish distinctive anomalies for uranium, helium, radon, and copper near the mineralized localities, but the samples must be segregated into aquifers in order to obtain continuous well-defined anomalies. Two zones of uranium-rich water (1 to 16 parts per billion) can be recognized on cross sections; the upper zone extends through the known occurrences. The anomalies in uranium and helium are strongest in the deeper parts of the aquifers and are diluted in samples from shallow wells. In stream water, copper and uranium are slightly anomalous, as in an ore factor derived from factor analysis. Ratios of copper, uranium, and zinc to conductivity improve the resolution of anomalies. In stream sediment, extractable uranium, copper, niobium, vanadium, and an ore factor furnish weak anomalies, and ratios of uranium and copper to zinc improve the definition of anomalies. The uranium/thorium ratio is not helpful. Published analyses of rock samples from the nearby stratigraphic section show distinct anomalies in the zone containing the copper-uranium occurrences. This report is being issued without the normal detailed technical and copy editing, to make the data available to the public before the end of the National Uranium Reconnaissance Evaluation program

  7. Geochemical orientation survey of stream sediment, stream water, and ground water near uranium prospects, Monticello area, New York. National Uranium Resource Evaluation Program

    Energy Technology Data Exchange (ETDEWEB)

    Rose, A. W.; Smith, A. T.; Wesolowski, D.

    1982-08-01

    A detailed geochemical test survey has been conducted in a 570 sq km area around six small copper-uranium prospects in sandstones of the Devonian Catskill Formation near Monticello in southern New York state. This report summarizes and interprets the data for about 500 stream sediment samples, 500 stream water samples, and 500 ground water samples, each analyzed for 40 to 50 elements. The groundwater samples furnish distinctive anomalies for uranium, helium, radon, and copper near the mineralized localities, but the samples must be segregated into aquifers in order to obtain continuous well-defined anomalies. Two zones of uranium-rich water (1 to 16 parts per billion) can be recognized on cross sections; the upper zone extends through the known occurrences. The anomalies in uranium and helium are strongest in the deeper parts of the aquifers and are diluted in samples from shallow wells. In stream water, copper and uranium are slightly anomalous, as in an ore factor derived from factor analysis. Ratios of copper, uranium, and zinc to conductivity improve the resolution of anomalies. In stream sediment, extractable uranium, copper, niobium, vanadium, and an ore factor furnish weak anomalies, and ratios of uranium and copper to zinc improve the definition of anomalies. The uranium/thorium ratio is not helpful. Published analyses of rock samples from the nearby stratigraphic section show distinct anomalies in the zone containing the copper-uranium occurrences. This report is being issued without the normal detailed technical and copy editing, to make the data available to the public before the end of the National Uranium Reconnaissance Evaluation program.

  8. Various aspects of vehicles image data-streams reduction for road traffic sufficient description

    Directory of Open Access Journals (Sweden)

    Jan PIECHA

    2007-01-01

    Full Text Available On-line image processing was implemented for video-camera-based traffic control. To reduce the immense dimension of the data sets, various data-sampling approaches were considered. First the required sampling ratio was determined, then simple but effective image-processing algorithms had to be chosen, and finally hardware solutions for parallel processing are discussed. A PLA computing engine was employed to cope with this task and to fulfil the assumed timing characteristics. The developer has to consider several restrictions and preferences, and no universal algorithm is available so far. The reported work concerns the development of vehicle-stream recorders that must perform all recording and computing procedures within strictly defined time limits.

  9. The large scale and long term evolution of the solar wind speed distribution and high speed streams

    International Nuclear Information System (INIS)

    Intriligator, D.S.

    1977-01-01

    The spatial and temporal evolution of the solar wind speed distribution and of high speed streams in the solar wind are examined. Comparisons of the solar wind streaming speeds measured at Earth, Pioneer 11, and Pioneer 10 indicate that between 1 AU and 6.4 AU the solar wind speed distributions are narrower (i.e. the 95% value minus the 5% value of the solar wind streaming speed is less) at extended heliocentric distances. These observations are consistent with one exchange of momentum in the solar wind between high speed streams and low speed streams as they propagate outward from the Sun. Analyses of solar wind observations at 1 AU from mid 1964 through 1973 confirm the earlier results reported by Intriligator (1974) that there are statistically significant variations in the solar wind in 1968 and 1969, years of solar maximum. High speed stream parameters show that the number of high speed streams in the solar wind in 1968 and 1969 is considerably more than the predicted yearly average, and in 1965 and 1972 less. Histograms of solar wind speed from 1964 through 1973 indicate that in 1968 there was the highest percentage of elevated solar wind speeds and in 1965 and 1972 the lowest. Studies by others also confirm these results although the respective authors did not indicate this fact. The duration of the streams and the histograms for 1973 imply a shifting in the primary stream source. (Auth.)

  10. An Efficient Algorithm for the Discrete Gabor Transform using full length Windows

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer in [1] to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used...
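
    For reference, a direct (non-factorized) evaluation of the DGT of a length-L signal f with window g, time shift a, and M channels can be sketched as below; the particular sign and normalization convention is an assumption (conventions differ between texts), and this naive O(L·M·N) loop is exactly the computation that the factorization accelerates.

    ```python
    # Naive reference DGT: c[m, n] = sum_l f[l] * conj(g[l - n*a]) * exp(-2j*pi*m*l/M),
    # with the window indexed modulo L. Slow by construction; the factorized
    # algorithm computes the same coefficients much faster.
    import numpy as np

    def dgt_naive(f, g, a, M):
        L = len(f)
        assert L % a == 0 and len(g) == L
        N = L // a
        c = np.zeros((M, N), dtype=complex)
        l = np.arange(L)
        for n in range(N):
            win = np.conj(g[(l - n * a) % L])
            for m in range(M):
                c[m, n] = np.sum(f * win * np.exp(-2j * np.pi * m * l / M))
        return c

    L, a, M = 144, 12, 24
    f = np.random.randn(L)
    g = np.exp(-np.pi * ((np.arange(L) - L / 2) / (L / 8)) ** 2)  # Gaussian-like window
    print(dgt_naive(f, g, a, M).shape)  # (24, 12)
    ```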

  11. Stream systems.

    Science.gov (United States)

    Jack E. Williams; Gordon H. Reeves

    2006-01-01

    Restored, high-quality streams provide innumerable benefits to society. In the Pacific Northwest, high-quality stream habitat often is associated with an abundance of salmonid fishes such as chinook salmon (Oncorhynchus tshawytscha), coho salmon (O. kisutch), and steelhead (O. mykiss). Many other native...

  12. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). The common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to convey the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using the average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated to commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
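
    A minimal sketch of the average-background step is shown below using OpenCV's running-average accumulator; the camera index, learning rate, and threshold are arbitrary assumptions, and the subsequent 1$ template-matching stage is not reproduced here.

    ```python
    # Running-average background subtraction sketch (OpenCV). The webcam index,
    # learning rate (0.02), and threshold (25) are assumptions; the 1$ gesture
    # matching stage is not shown.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    background = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
        cv2.accumulateWeighted(gray, background, 0.02)   # update the average background
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype(np.uint8), 25, 255, cv2.THRESH_BINARY)
        cv2.imshow("foreground (hand) mask", mask)
        if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()
    ```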

  13. Tracking single dynamic MEG dipole sources using the projected Extended Kalman Filter.

    Science.gov (United States)

    Yao, Yuchen; Swindlehurst, A Lee

    2011-01-01

    This paper presents two new algorithms based on the Extended Kalman Filter (EKF) for tracking the parameters of single dynamic magnetoencephalography (MEG) dipole sources. We assume a dynamic MEG dipole source with possibly both time-varying location and dipole orientation. The standard EKF-based tracking algorithm performs well under the assumption that the dipole source components vary in time as a Gauss-Markov process, provided that the background noise is temporally stationary. We propose a Projected-EKF algorithm that is adapted to a more forgiving condition where the background noise is temporally nonstationary, as well as a Projected-GLS-EKF algorithm that works even more universally, when the dipole components vary arbitrarily from one sample to the next.

  14. Innovative hyperchaotic encryption algorithm for compressed video

    Science.gov (United States)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that a stream cryptosystem can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the Logistic Map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the Logistic Map with a Z(2^32 - 1) field linear congruential algorithm to strengthen the security of the mono-chaotic cryptography while maintaining the real-time performance and flexibility of the chaotic sequence cryptography. It also integrates asymmetric public-key cryptography and performs encryption and identity authentication on the control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the proposed hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. Cryptanalysis shows that the scheme is robust, and arithmetic evaluation and tests show good real-time performance and flexible implementation capability.
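
    To make the chaotic-keystream idea concrete, here is a toy sketch of a Logistic-Map-driven XOR cipher; it illustrates only the mono-chaotic building block (with made-up parameters), not the paper's layered hyperchaotic scheme that combines the map with a Z(2^32 - 1) linear congruential algorithm and public-key protection of the control parameters.

    ```python
    # Toy logistic-map keystream: x_{n+1} = r * x_n * (1 - x_n), quantized to bytes.
    # Parameters r and x0 are illustrative; this is the mono-chaotic building block
    # only, not the full hyperchaotic, layered scheme described in the abstract.
    def logistic_keystream(x0, r, nbytes):
        x, out = x0, bytearray()
        for _ in range(nbytes):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)    # quantize the chaotic state to a byte
        return bytes(out)

    def xor_cipher(data, x0=0.6180339887, r=3.99):
        ks = logistic_keystream(x0, r, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    payload = b"selected headers and macroblocks of a compressed video stream"
    ciphertext = xor_cipher(payload)
    assert xor_cipher(ciphertext) == payload    # XOR keystream is its own inverse
    ```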

  15. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction errors and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
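
    As a software analogue of the streaming idea (not the chip's fixed-point architecture), the leading principal component can be tracked one sample at a time with Oja's rule, as in the hedged NumPy sketch below; the covariance matrix and learning rate are made up for illustration.

    ```python
    # Streaming estimate of the first principal component with Oja's rule:
    # w <- w + eta * y * (x - y * w), where y = w.x. A generic software
    # illustration of streaming PCA, not the paper's hardware design.
    import numpy as np

    rng = np.random.default_rng(0)
    cov = np.array([[3.0, 1.2], [1.2, 1.0]])           # synthetic 2-channel covariance
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    eta = 1e-3
    for x in samples:                                   # one pass over the stream
        y = w @ x
        w += eta * y * (x - y * w)

    true_pc = np.linalg.eigh(cov)[1][:, -1]             # top eigenvector of the covariance
    print(abs(w / np.linalg.norm(w) @ true_pc))         # should be close to 1.0
    ```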

  16. Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors

    Science.gov (United States)

    Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.

    2010-01-01

    The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from the thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km x 24 km at nadir) and at the 5 km x 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approx. 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds. We also find higher fractions of vertically-extended clouds over land as compared with

  17. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    Science.gov (United States)

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with particle filter. This filter improves ECG denoising performance by implementing marginalized particle filter framework while reducing its computational complexity using EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework to the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms in lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed

  18. PROXY-BASED PATCHING STREAM TRANSMISSION STRATEGY IN MOBILE STREAMING MEDIA SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Liao Jianxin; Lei Zhengxiong; Ma Xutao; Zhu Xiaomin

    2006-01-01

    A mobile transmission strategy, PMPatching (Proxy-based Mobile Patching), is proposed; it applies to proxy-based mobile streaming media systems in Wideband Code Division Multiple Access (WCDMA) networks. Performance of the whole system can be improved by using a patching stream to transmit the anterior part of the suffix that has already been played back, and by batching all requests for the suffix that arrive during the prefix period and the patching-stream transmission threshold period. Experimental results show that this strategy can efficiently reduce the average network transmission cost and the number of channels consumed in the central streaming media server.

  19. A decision support system using combined-classifier for high-speed data stream in smart grid

    Science.gov (United States)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated by big power grids continuously. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among all the decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier can return more accurate answers than other existing ones.
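
    A rough software sketch of the cache-plus-main-tree idea is given below; the cache size, eviction policy, "trust radius", retraining schedule, and the use of scikit-learn's batch DecisionTreeClassifier in place of a true incremental tree are all simplifying assumptions, not the paper's design.

    ```python
    # Hedged sketch of a combined stream classifier: a cache-based classifier (CBC)
    # answers from recently labeled examples when one is close enough, and a main
    # tree classifier (MTC) retrained periodically on a sliding window answers
    # otherwise. All sizes and thresholds are assumptions.
    from collections import deque
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    class CombinedStreamClassifier:
        def __init__(self, cache_size=200, retrain_every=500, trust_radius=0.1):
            self.cache = deque(maxlen=cache_size)       # recent (x, y) pairs
            self.window = deque(maxlen=5000)            # sliding window for retraining
            self.tree = None
            self.retrain_every = retrain_every
            self.trust_radius = trust_radius
            self.seen = 0

        def predict(self, x):
            x = np.asarray(x)
            if self.cache:                              # CBC: nearest cached example
                xs, ys = zip(*self.cache)
                d = np.linalg.norm(np.asarray(xs) - x, axis=1)
                if d.min() < self.trust_radius:
                    return ys[int(d.argmin())]
            if self.tree is not None:                   # MTC: fall back to the tree
                return self.tree.predict([x])[0]
            return 0                                    # default before any training

        def partial_fit(self, x, y):
            self.cache.append((np.asarray(x), y))
            self.window.append((np.asarray(x), y))
            self.seen += 1
            if self.seen % self.retrain_every == 0:     # periodic batch retraining
                X, Y = map(np.asarray, zip(*self.window))
                self.tree = DecisionTreeClassifier(max_depth=6).fit(X, Y)
    ```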

  20. A frequency domain subspace algorithm for mixed causal, anti-causal LTI systems

    NARCIS (Netherlands)

    Fraanje, Rufus; Verhaegen, Michel; Verdult, Vincent; Pintelon, Rik

    2003-01-01

    The paper extends the subspace identification method for estimating state-space models from frequency response function (FRF) samples, proposed by McKelvey et al. (1996), to mixed causal/anti-causal systems, and shows that other frequency domain subspace algorithms can be extended similarly. The method

  1. GCA-w Algorithms for Traffic Simulation

    International Nuclear Information System (INIS)

    Hoffmann, R.

    2011-01-01

    The GCA-w model (Global Cellular Automata with write access) is an extension of the GCA (Global Cellular Automata) model, which is based on the cellular automata model (CA). Whereas the CA model uses static links to local neighbors, the GCA model uses dynamic links to potentially global neighbors. The GCA-w model is a further extension that allows modifying the neighbors' states. Thereby, neighbors can dynamically be activated or deactivated. Algorithms can be described more concisely and may execute more efficiently because redundant computations can be avoided. Modeling traffic flow is a good example showing the usefulness of the GCA-w model. The Nagel-Schreckenberg algorithm for traffic simulation is first described as a CA and a GCA, and then transformed into the GCA-w model. This algorithm is "exclusive-write", meaning that no write conflicts have to be resolved. Furthermore, this algorithm is extended to allow cars stuck in a traffic jam to be deactivated and reactivated in order to save computation time and energy. (author)
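
    For reference, the classical Nagel-Schreckenberg update that the GCA-w formulation re-expresses has four steps per time step (accelerate, brake to the gap ahead, randomize, move); the NumPy sketch below implements that baseline CA on a periodic road, not the GCA-w version with write access or the deactivation extension.

    ```python
    # Classical Nagel-Schreckenberg cellular automaton (baseline CA, not GCA-w):
    # 1) accelerate, 2) brake to the gap ahead, 3) random slowdown, 4) move.
    import numpy as np

    def nasch_step(pos, vel, road_len, vmax=5, p_slow=0.3, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells to the car ahead
        vel = np.minimum(vel + 1, vmax)                  # 1) accelerate
        vel = np.minimum(vel, gaps)                      # 2) brake to avoid collisions
        vel = np.where(rng.random(len(vel)) < p_slow,    # 3) random slowdown
                       np.maximum(vel - 1, 0), vel)
        pos = (pos + vel) % road_len                     # 4) move (periodic road)
        return pos, vel

    road_len, n_cars = 100, 30
    rng = np.random.default_rng(1)
    pos = np.sort(rng.choice(road_len, n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    for _ in range(200):
        pos, vel = nasch_step(pos, vel, road_len, rng=rng)
    print("mean speed:", vel.mean())
    ```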

  2. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D

  3. Solving the Extended Tree Knapsack Problem with fixed cost flow ...

    African Journals Online (AJOL)

    Parts of the Local Access Telecommunication Network planning problem may be modelled as an Extended Tree Knapsack Problem. The Local Access Telecommunication Network can contribute up to 60% of the total network costs. This paper presents partitioning algorithms that use standard off-the-shelf software coupled ...

  4. The Magellanic Stream and debris clouds

    Energy Technology Data Exchange (ETDEWEB)

    For, B.-Q.; Staveley-Smith, L. [International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 (Australia); Matthews, D. [Centre for Materials and Surface Science, La Trobe University, Melbourne, VIC 3086 (Australia); McClure-Griffiths, N. M., E-mail: biqing.for@icrar.org [CSIRO Astronomy and Space Science, Epping, NSW 1710 (Australia)

    2014-09-01

    We present a study of the discrete clouds and filaments in the Magellanic Stream using a new high-resolution survey of neutral hydrogen (H I) conducted with the H75 array of the Australia Telescope Compact Array, complemented by single-dish data from the Parkes Galactic All-Sky Survey. From the individual and combined data sets, we have compiled a catalog of 251 clouds and listed their basic parameters, including a morphological description useful for identifying cloud interactions. We find an unexpectedly large number of head-tail clouds in the region. The implication for the formation mechanism and evolution is discussed. The filaments appear to originate entirely from the Small Magellanic Cloud and extend into the northern end of the Magellanic Bridge.

  5. Design and algorithm research of high precision airborne infrared touch screen

    Science.gov (United States)

    Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan

    2016-10-01

    Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between emitting and receiving tubes is recorded as 0, while the impeded state is recorded as 1. Then, the method of oblique scan is used, in which the light of one emitting tube is received by five receiving tubes. The impeded information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by taking an arithmetic average. The extended-axis positioning algorithm retains high precision when an individual infrared tube fails, with only a slight loss of accuracy. The experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. It is concluded that the extended-axis algorithm offers high precision, little impact from the failure of an individual infrared tube, and ease of use.
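
    The arithmetic-average idea can be illustrated with the toy sketch below, in which every blocked emitter-to-receiver beam is treated as a line and the touch point is estimated as the average of the pairwise line intersections; the beam geometry, the pairing of emitters with receivers, and the averaging rule are assumptions for illustration, not the paper's exact extended-axis computation.

    ```python
    # Hypothetical centroid-style touch localization: average the pairwise
    # intersections of all blocked emitter->receiver beams. Geometry and averaging
    # rule are illustrative assumptions.
    from itertools import combinations

    def line_intersection(p1, p2, p3, p4):
        (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:
            return None                                  # parallel beams
        a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
        return ((a * (x3 - x4) - (x1 - x2) * b) / d,
                (a * (y3 - y4) - (y1 - y2) * b) / d)

    def estimate_touch(blocked_beams):
        """blocked_beams: list of (emitter_xy, receiver_xy) pairs flagged as 1."""
        points = []
        for (e1, r1), (e2, r2) in combinations(blocked_beams, 2):
            p = line_intersection(e1, r1, e2, r2)
            if p is not None:
                points.append(p)
        if not points:
            return None
        n = len(points)
        return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

    # Two oblique beams blocked by a finger near (3, 2) on a made-up screen layout.
    beams = [((2.0, 0.0), (4.0, 5.0)), ((5.0, 0.0), (0.0, 5.0))]
    print(estimate_touch(beams))
    ```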

  6. Evaluation of the streaming-matrix method for discrete-ordinates duct-streaming calculations

    International Nuclear Information System (INIS)

    Clark, B.A.; Urban, W.T.; Dudziak, D.J.

    1983-01-01

    A new deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) is applied to two realistic duct-shielding problems. The results are compared to standard discrete-ordinates and Monte Carlo calculations. The SMHM shows promise as an alternative deterministic streaming method to standard discrete-ordinates

  7. Extended phase graphs with anisotropic diffusion

    Science.gov (United States)

    Weigel, M.; Schwenk, S.; Kiselev, V. G.; Scheffler, K.; Hennig, J.

    2010-08-01

    The extended phase graph (EPG) calculus gives an elegant pictorial description of magnetization response in multi-pulse MR sequences. The use of the EPG calculus enables a high computational efficiency for the quantitation of echo intensities even for complex sequences with multiple refocusing pulses with arbitrary flip angles. In this work, the EPG concept dealing with RF pulses with arbitrary flip angles and phases is extended to account for anisotropic diffusion in the presence of arbitrary varying gradients. The diffusion effect can be expressed by specific diffusion weightings of individual magnetization pathways. This can be represented as an action of a linear operator on the magnetization state. The algorithm allows easy integration of diffusion anisotropy effects. The formalism is validated on known examples from literature and used to calculate the effective diffusion weighting in multi-echo sequences with arbitrary refocusing flip angles.
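
    As a reminder of the standard relation the extension builds on (stated here in generic form rather than as the paper's specific operator), anisotropic diffusion attenuates the signal of a magnetization pathway with accumulated b-matrix $\mathbf{b}$ in a medium with diffusion tensor $\mathbf{D}$ by

    $$ A \;=\; \exp\big(-\mathbf{b}:\mathbf{D}\big) \;=\; \exp\Big(-\sum_{i,j\in\{x,y,z\}} b_{ij}\, D_{ij}\Big), $$

    so that, in the EPG picture, each configuration state carries its own b-matrix accumulated from the gradient history of its pathway.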

  8. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    Science.gov (United States)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.

  9. A globally convergent MC algorithm with an adaptive learning rate.

    Science.gov (United States)

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research works show that convergence of neural networks based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of data set, which is unavailable in online extraction of minor component from input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information, and can be easily satisfied in practical applications.

  10. Algorithmic correspondence and completeness in modal logic. V. Recursive extensions of SQEMA

    DEFF Research Database (Denmark)

    Conradie, Willem; Goranko, Valentin; Vakarelov, Dimiter

    2010-01-01

    The previously introduced algorithm SQEMA computes first-order frame equivalents for modal formulae and also proves their canonicity. Here we extend SQEMA with an additional rule based on a recursive version of Ackermann's lemma, which enables the algorithm to compute local frame equivalents...... on the class of ‘recursive formulae’. We also show that a certain version of this algorithm guarantees the canonicity of the formulae on which it succeeds....

  11. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2014-03-01

    Full Text Available IDEAL is an efficient segregated algorithm for fluid flow and heat transfer problems. This algorithm has now been extended to 3D nonorthogonal curvilinear coordinates. Highly skewed grids in nonorthogonal curvilinear coordinates can decrease the convergence rate and degrade the computational stability. In this study, the feasibility of the IDEAL algorithm on a highly skewed grid system is analyzed by investigating the lid-driven flow in an inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for highly skewed and fine grid systems. For example, at θ = 5° and a grid number of 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm can converge at almost any time step multiple.

  12. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. InfoRoute: the CISMeF Context-specific Search Algorithm.

    Science.gov (United States)

    Merabti, Tayeb; Lelong, Romain; Darmoni, Stefan

    2015-01-01

    The aim of this paper was to present a practical InfoRoute algorithm and applications developed by CISMeF to perform a contextual information retrieval across multiple medical websites in different health domains. The algorithm was developed to treat multiple types of queries: natural, Boolean and advanced. The algorithm also generates multiple types of queries: Boolean query, PubMed query or Advanced query. Each query can be extended via an inter alignments relationship from UMLS and HeTOP portal. A web service and two web applications have been developed based on the InfoRoute algorithm to generate links-query across multiple websites, i.e.: "PubMed" or "ClinicalTrials.org". The InfoRoute algorithm is a useful tool to perform contextual information retrieval across multiple medical websites in both English and French.

  14. Consequences of variation in stream-landscape connections for stream nitrate retention and export

    Science.gov (United States)

    Handler, A. M.; Helton, A. M.; Grimm, N. B.

    2017-12-01

    Hydrologic and material connections among streams, the surrounding terrestrial landscape, and groundwater systems fluctuate between extremes in dryland watersheds, yet the consequences of this variation for stream nutrient retention and export remain uncertain. We explored how seasonal variation in hydrologic connection among streams, landscapes, and groundwater affects nitrate and ammonium concentrations across a dryland stream network and how this variation mediates in-stream nitrate uptake and watershed export. We conducted spatial surveys of stream nitrate and ammonium concentration across the 1200 km2 Oak Creek watershed in central Arizona (USA). In addition, we conducted pulse releases of a solution containing biologically reactive sodium nitrate, with sodium chloride as a conservative hydrologic tracer, to estimate nitrate uptake rates in the mainstem (Q > 1000 L/s) and two tributaries. Nitrate and ammonium concentrations generally increased from headwaters to mouth in the mainstem. Locally elevated concentrations occurred in spring-fed tributaries draining fish hatcheries and larger irrigation ditches, but did not have a substantial effect on the mainstem nitrogen load. Ambient nitrate concentration (as N) ranged from below the analytical detection limit of 0.005 mg/L to 0.43 mg/L across all uptake experiments. Uptake length, the average stream distance traveled by a nutrient atom from the point of release to its uptake, ranged from 250 to 704 m at ambient concentration and increased significantly with higher discharge, both across streams and within the same stream on different experiment dates. Vertical uptake velocity and areal uptake rate ranged from 6.6 to 10.6 mm min-1 and from 0.03 to 1.4 mg N m-2 min-1, respectively. Preliminary analyses indicate potentially elevated nitrogen loading to the lower portion of the watershed during seasonal precipitation events, but overall, the capacity for nitrate uptake is high in the mainstem and tributaries. Ongoing work

  15. Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR

    Science.gov (United States)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.

    2015-12-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged-particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.

  16. Streaming tearing mode

    Science.gov (United States)

    Shigeta, M.; Sato, T.; Dasgupta, B.

    1985-01-01

    The magnetohydrodynamic stability of streaming tearing mode is investigated numerically. A bulk plasma flow parallel to the antiparallel magnetic field lines and localized in the neutral sheet excites a streaming tearing mode more strongly than the usual tearing mode, particularly for the wavelength of the order of the neutral sheet width (or smaller), which is stable for the usual tearing mode. Interestingly, examination of the eigenfunctions of the velocity perturbation and the magnetic field perturbation indicates that the streaming tearing mode carries more energy in terms of the kinetic energy rather than the magnetic energy. This suggests that the streaming tearing mode instability can be a more feasible mechanism of plasma acceleration than the usual tearing mode instability.

  17. A Clustering Approach Using Cooperative Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Wenping Zou

    2010-01-01

    Full Text Available Artificial Bee Colony (ABC) is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm. This paper presents an extended ABC algorithm, namely the Cooperative Artificial Bee Colony (CABC), which significantly improves the original ABC in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; therefore, the CABC can be used for solving clustering problems. In this work, the CABC algorithm is first used to optimize six widely used benchmark functions, and the comparative results produced by ABC, Particle Swarm Optimization (PSO), and its cooperative version (CPSO) are studied. Second, the CABC algorithm is used for data clustering on several benchmark data sets. The performance of the CABC algorithm is compared with the PSO, CPSO, and ABC algorithms on clustering problems. The simulation results show that the proposed CABC outperforms the other three algorithms in terms of accuracy, robustness, and convergence speed.
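
    The record does not include the authors' code, so the following is only a minimal Python sketch of the pieces a bee-colony clusterer needs: the sum-of-squared-error fitness with which candidate centroid sets are scored, and a basic employed-bee neighbourhood move. All names and parameter values are illustrative.

        import numpy as np

        def clustering_fitness(centroids, data):
            """Sum of squared Euclidean distances from each point to its
            nearest centroid -- the objective an ABC/CABC swarm minimizes."""
            d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            return d2.min(axis=1).sum()

        def employed_bee_move(solution, partner, lower, upper, rng):
            """Basic ABC neighbourhood search: perturb one randomly chosen
            dimension of a food source towards/away from a random partner."""
            candidate = solution.copy()
            flat = candidate.ravel()
            j = rng.integers(flat.size)
            phi = rng.uniform(-1.0, 1.0)
            flat[j] = np.clip(flat[j] + phi * (flat[j] - partner.ravel()[j]), lower, upper)
            return candidate

        # Toy usage: two clusters in 2-D, candidate centroids encoded as a (k, d) array.
        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
        centroids = rng.uniform(-1, 4, (2, 2))
        print(clustering_fitness(centroids, data))
        print(clustering_fitness(employed_bee_move(centroids, rng.uniform(-1, 4, (2, 2)), -1, 4, rng), data))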

  18. Projected warming portends seasonal shifts of stream temperatures in the Crown of the Continent Ecosystem, USA and Canada

    Science.gov (United States)

    Jones, Leslie A.; Muhlfeld, Clint C.; Marshall, Lucy A.

    2017-01-01

    Climate warming is expected to increase stream temperatures in mountainous regions of western North America, yet the degree to which future climate change may influence seasonal patterns of stream temperature is uncertain. In this study, a spatially explicit statistical model framework was integrated with empirical stream temperature data (approximately four million bi-hourly recordings) and high-resolution climate and land surface data to estimate monthly stream temperatures and potential change under future climate scenarios in the Crown of the Continent Ecosystem, USA and Canada (72,000 km2). Moderate and extreme warming scenarios forecast increasing stream temperatures during spring, summer, and fall, with the largest increases predicted during summer (July, August, and September). Additionally, thermal regimes characteristic of current August temperatures, the warmest month of the year, may be exceeded during July and September, suggesting an earlier and extended duration of warm summer stream temperatures. Models estimate that the largest magnitude of temperature warming relative to current conditions may be observed during the shoulder months of winter (April and November). Summer stream temperature warming is likely to be most pronounced in glacial-fed streams where models predict the largest magnitude (> 50%) of change due to the loss of alpine glaciers. We provide the first broad-scale analysis of seasonal climate effects on spatiotemporal patterns of stream temperature in the Crown of the Continent Ecosystem for better understanding climate change impacts on freshwater habitats and guiding conservation and climate adaptation strategies.

  19. Continuous Distributed Top-k Monitoring over High-Speed Rail Data Stream in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2013-01-01

    Full Text Available In a cloud computing environment, massive real-time data about high-speed rail, produced by intensive monitoring of large-scale sensing equipment, provide strong support for the safety and maintenance of high-speed rail. In this paper, we focus on continuous distributed Top-k monitoring over multisource distributed data streams for high-speed rail. Specifically, we formalize the Top-k monitoring model for high-speed rail and propose DTMR, a Top-k monitoring algorithm for random, continuous, or strictly monotone aggregation functions. The validity of DTMR is demonstrated through extensive experiments.
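
    DTMR itself is not spelled out in the record, so the sketch below only illustrates the general shape of continuous distributed top-k monitoring: remote nodes push partial values for monitored objects and a coordinator maintains the global top-k of the summed aggregates. Class and node names are hypothetical, and no communication-saving constraints are modelled.

        import heapq
        from collections import defaultdict

        class TopKCoordinator:
            """Toy coordinator for distributed top-k monitoring: each remote
            node pushes (object_id, partial_value) updates; the coordinator
            keeps the global aggregate (sum over nodes) and reports the top-k."""

            def __init__(self, k):
                self.k = k
                self.partials = defaultdict(dict)   # object_id -> {node_id: value}

            def update(self, node_id, object_id, value):
                self.partials[object_id][node_id] = value

            def topk(self):
                totals = {obj: sum(vals.values()) for obj, vals in self.partials.items()}
                return heapq.nlargest(self.k, totals.items(), key=lambda kv: kv[1])

        # Example: three monitoring nodes streaming sensor scores for rail segments.
        coord = TopKCoordinator(k=2)
        coord.update("node1", "segment_A", 5.0)
        coord.update("node2", "segment_A", 2.5)
        coord.update("node1", "segment_B", 4.0)
        coord.update("node3", "segment_C", 9.0)
        print(coord.topk())   # [('segment_C', 9.0), ('segment_A', 7.5)]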

  20. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    International Nuclear Information System (INIS)

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  1. Quantum algorithms for the ordered search problem via semidefinite programming

    International Nuclear Information System (INIS)

    Childs, Andrew M.; Landahl, Andrew J.; Parrilo, Pablo A.

    2007-01-01

    One of the most basic computational problems is the task of finding a desired item in an ordered list of N items. While the best classical algorithm for this problem uses log_2 N queries to the list, a quantum computer can solve the problem using a constant factor fewer queries. However, the precise value of this constant is unknown. By characterizing a class of quantum query algorithms for the ordered search problem in terms of a semidefinite program, we find quantum algorithms for small instances of the ordered search problem. Extending these algorithms to arbitrarily large instances using recursion, we show that there is an exact quantum ordered search algorithm using 4 log_605 N ≈ 0.433 log_2 N queries, which improves upon the previously best known exact algorithm.

  2. The distribution of copper in stream sediments in an anomalous stream near Steinkopf, Namaqualand

    International Nuclear Information System (INIS)

    De Bruin, D.

    1987-01-01

    Anomalous copper concentrations detected by the regional stream-sediment programme of the Geological Survey was investigated in a stream near Steinkopf, Namaqualand. A follow-up disclosed the presence of malachite mineralization. However, additional stream-sediment samples collected from the 'anomalous' stream revealed an erratic distribution of copper and also that the malachite mineralization had no direct effect on the copper distribution in the stream sediments. Low partial-extraction yields, together with X-ray diffraction analyses, indicated that dispersion is mainly mechanical and that the copper occurs as cations in the lattice of the biotite fraction of the stream sediments. (author). 8 refs., 5 figs., 1 tab

  3. The distribution of copper in stream sediments in an anomalous stream near Steinkopf, Namaqualand

    Energy Technology Data Exchange (ETDEWEB)

    De Bruin, D

    1987-01-01

    Anomalous copper concentrations detected by the regional stream-sediment programme of the Geological Survey was investigated in a stream near Steinkopf, Namaqualand. A follow-up disclosed the presence of malachite mineralization. However, additional stream-sediment samples collected from the 'anomalous' stream revealed an erratic distribution of copper and also that the malachite mineralization had no direct effect on the copper distribution in the stream sediments. Low partial-extraction yields, together with X-ray diffraction analyses, indicated that dispersion is mainly mechanical and that the copper occurs as cations in the lattice of the biotite fraction of the stream sediments. (author). 8 refs., 5 figs., 1 tab.

  4. Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces

    Science.gov (United States)

    Flipo, N.; Mouhri, A.; Labarthe, B.; Biancamaria, S.; Rivière, A.; Weill, P.

    2014-08-01

    Coupled hydrological-hydrogeological models, emphasising the importance of the stream-aquifer interface, are increasingly used in the hydrological sciences for multidisciplinary studies investigating environmental issues. Based on an extensive literature review, stream-aquifer interfaces are described at five different scales: local [10 cm-~10 m], intermediate [~10 m-~1 km], watershed [10 km2-~1000 km2], regional [10 000 km2-~1 M km2] and continental scales [>10 M km2]. This led us to develop the concept of nested stream-aquifer interfaces, which extends the well-known vision of nested groundwater pathways towards the surface, where the mixing of low frequency processes and high frequency processes coupled with the complexity of geomorphological features and heterogeneities creates hydrological spiralling. This conceptual framework allows the identification of a hierarchical order of the multi-scale control factors of stream-aquifer hydrological exchanges, from the larger scale to the finer scale. The hyporheic corridor, which couples the river to its 3-D hyporheic zone, is then identified as the key component for scaling hydrological processes occurring at the interface. The identification of the hyporheic corridor as the support of the hydrological processes scaling is an important step for the development of regional studies, which is one of the main concerns for water practitioners and resources managers. In a second part, the modelling of the stream-aquifer interface at various scales is investigated with the help of the conductance model. Although the use of temperature as a flow tracer is a robust method for the assessment of stream-aquifer exchanges at the local scale, there is a crucial need to develop innovative methodologies for assessing stream-aquifer exchanges at the regional scale. After formulating the conductance model at the regional and intermediate scales, we address this challenging issue with the development of an

  5. A Selectivity based approach to Continuous Pattern Detection in Streaming Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Holder, Larry; Chin, George; Agarwal, Khushbu; Feo, John T.

    2015-05-27

    Cyber security is one of the most significant technical challenges in current times. Detecting adversarial activities, prevention of theft of intellectual properties and customer data is a high priority for corporations and government agencies around the world. Cyber defenders need to analyze massive-scale, high-resolution network flows to identify, categorize, and mitigate attacks involving networks spanning institutional and national boundaries. Many of the cyber attacks can be described as subgraph patterns, with prominent examples being insider infiltrations (path queries), denial of service (parallel paths) and malicious spreads (tree queries). This motivates us to explore subgraph matching on streaming graphs in a continuous setting. The novelty of our work lies in using the subgraph distributional statistics collected from the streaming graph to determine the query processing strategy. We introduce a "Lazy Search" algorithm where the search strategy is decided on a vertex-to-vertex basis depending on the likelihood of a match in the vertex neighborhood. We also propose a metric named "Relative Selectivity" that is used to select between different query processing strategies. Our experiments performed on real online news, network traffic stream and a synthetic social network benchmark demonstrate 10-100x speedups over non-incremental, selectivity agnostic approaches.

  6. A morphological comparison of narrow, low-gradient streams traversing wetland environments to alluvial streams.

    Science.gov (United States)

    Jurmu, Michael C

    2002-12-01

    Twelve morphological features from research on alluvial streams are compared in four narrow, low-gradient wetland streams located in different geographic regions (Connecticut, Indiana, and Wisconsin, USA). All four reaches differed in morphological characteristics in five of the features compared (consistent bend width, bend cross-sectional shape, riffle width compared to pool width, greatest width directly downstream of riffles, and thalweg location), while three reaches differed in two comparisons (mean radius of curvature to width ratio and axial wavelength to width ratio). The remaining five features compared had at least one reach where different characteristics existed. This indicates the possibility of varying morphology for streams traversing wetland areas further supporting the concept that the unique qualities of wetland environments might also influence the controls on fluvial dynamics and the development of streams. If certain morphological features found in streams traversing wetland areas differ from current fluvial principles, then these varying features should be incorporated into future wetland stream design and creation projects. The results warrant further research on other streams traversing wetlands to determine if streams in these environments contain unique morphology and further investigation of the impact of low-energy fluvial processes on morphological development. Possible explanations for the morphology deviations in the study streams and some suggestions for stream design in wetland areas based upon the results and field observations are also presented.

  7. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    Science.gov (United States)

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from the vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22 K unique blog posts with 170 K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  8. Real-time algorithm for acoustic imaging with a microphone array.

    Science.gov (United States)

    Huang, Xun

    2009-05-01

    Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as the classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain but is extended to the frequency domain for array processing. The observer-based algorithm is beneficial mainly for its capability of operating recursively over sampling blocks. The expensive experimental time can therefore be reduced substantially, since any defect in a test can be corrected instantaneously.
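
    For reference, the conventional (delay-and-sum) beamforming map that the observer-based method is compared against can be written as b(x) = e(x)^H C e(x), where C is the cross-spectral matrix of the microphone signals and e(x) the steering vector to a focus point x. A minimal numpy sketch with a free-field steering model follows; the array geometry, frequency and normalisation are illustrative choices, not values from the paper.

        import numpy as np

        def steering_vector(mic_pos, grid_point, freq, c=343.0):
            """Free-field steering vector for one focus point (monopole model)."""
            r = np.linalg.norm(mic_pos - grid_point, axis=1)      # mic-to-point distances
            return np.exp(-2j * np.pi * freq * r / c) / len(mic_pos)

        def conventional_beamform(csm, mic_pos, grid, freq):
            """Delay-and-sum beamforming map: b(x) = e(x)^H C e(x) for each grid point."""
            out = np.empty(len(grid))
            for i, x in enumerate(grid):
                e = steering_vector(mic_pos, x, freq)
                out[i] = np.real(e.conj() @ csm @ e)
            return out

        # Toy usage: 8-microphone line array, scanning a line of focus points at 2 kHz.
        rng = np.random.default_rng(1)
        mics = np.column_stack([np.linspace(-0.5, 0.5, 8), np.zeros(8), np.zeros(8)])
        snapshots = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
        csm = snapshots @ snapshots.conj().T / 200.0               # cross-spectral matrix estimate
        grid = np.column_stack([np.linspace(-1, 1, 21), np.zeros(21), np.full(21, 1.0)])
        print(conventional_beamform(csm, mics, grid, freq=2000.0).round(2))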

  9. InSTREAM: the individual-based stream trout research and environmental assessment model

    Science.gov (United States)

    Steven F. Railsback; Bret C. Harvey; Stephen K. Jackson; Roland H. Lamberson

    2009-01-01

    This report documents Version 4.2 of InSTREAM, including its formulation, software, and application to research and management problems. InSTREAM is a simulation model designed to understand how stream and river salmonid populations respond to habitat alteration, including altered flow, temperature, and turbidity regimes and changes in channel morphology. The model...

  10. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinver...

  11. Deforestation and benthic indicators: how much vegetation cover is needed to sustain healthy Andean streams?

    Science.gov (United States)

    Iñiguez-Armijos, Carlos; Leiva, Adrián; Frede, Hans-Georg; Hampel, Henrietta; Breuer, Lutz

    2014-01-01

    Deforestation in the tropical Andes is affecting ecological conditions of streams, and determination of how much forest should be retained is a pressing task for conservation, restoration and management strategies. We calculated and analyzed eight benthic metrics (structural, compositional and water quality indices) and a physical-chemical composite index with gradients of vegetation cover to assess the effects of deforestation on macroinvertebrate communities and water quality of 23 streams in southern Ecuadorian Andes. Using a geographical information system (GIS), we quantified vegetation cover at three spatial scales: the entire catchment, the riparian buffer of 30 m width extending the entire stream length, and the local scale defined for a stream reach of 100 m in length and similar buffer width. Macroinvertebrate and water quality metrics had the strongest relationships with vegetation cover at catchment and riparian scales, while vegetation cover did not show any association with the macroinvertebrate metrics at local scale. At catchment scale, the water quality metrics indicate that ecological condition of Andean streams is good when vegetation cover is over 70%. Further, macroinvertebrate community assemblages were more diverse and related in catchments largely covered by native vegetation (>70%). Our results suggest that retaining an important quantity of native vegetation cover within the catchments and a linkage between headwater and riparian forests help to maintain and improve stream biodiversity and water quality in Andean streams affected by deforestation. This research proposes that a strong regulation focused to the management of riparian buffers can be successful when decision making is addressed to conservation/restoration of Andean catchments.

  12. A Degree Distribution Optimization Algorithm for Image Transmission

    Science.gov (United States)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by its degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized according to the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore, the proposed algorithm is designed for the image transmission scenario. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
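
    The robust soliton distribution used as the baseline in the comparison is a standard construction (Luby, 2002) and is compact enough to sketch directly; this is not the proposed optimization algorithm, and the parameters c and delta below are illustrative defaults.

        import math
        import random

        def robust_soliton(k, c=0.1, delta=0.5):
            """Return the robust soliton degree distribution for k input symbols."""
            s = c * math.log(k / delta) * math.sqrt(k)
            rho = [0.0] + [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
            tau = [0.0] * (k + 1)
            pivot = int(round(k / s))
            for d in range(1, pivot):
                tau[d] = s / (k * d)
            if 1 <= pivot <= k:
                tau[pivot] = s * math.log(s / delta) / k
            z = sum(rho) + sum(tau)
            return [(rho[d] + tau[d]) / z for d in range(k + 1)]   # index = degree

        def sample_degree(dist, rng=random):
            """Draw an encoding-symbol degree from the distribution."""
            u, acc = rng.random(), 0.0
            for d in range(1, len(dist)):
                acc += dist[d]
                if u <= acc:
                    return d
            return len(dist) - 1

        dist = robust_soliton(k=1000)
        print(sum(dist), sample_degree(dist))   # probabilities sum to ~1.0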

  13. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically adjusts the video quality based on a user's current network state and repairs errors caused by data lost in transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an Unequal Forward Error Correction (FEC) algorithm. SVC provides an efficient method for offering different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or removing quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC was used; the FEC algorithm came from the Pro MPEG code of practice #3 release 2. Several bit-error scenarios (step function, cosine wave) with different bandwidths and error values were simulated. The suggested scheme, which combines SVC video encoding with three layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate an improvement of the video quality in terms of PSNR over previous transmission schemes.
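
    The record mentions a PID controller that adds or removes SVC enhancement layers according to the network state. A hypothetical sketch of that idea follows; the gains, the bandwidth-headroom error signal and the mapping to a layer count are invented for illustration and are not the authors' controller.

        class LayerPID:
            """Toy PID controller that maps the bandwidth headroom error to a
            target number of SVC layers (1 = base layer only, 3 = full quality)."""

            def __init__(self, kp=0.8, ki=0.1, kd=0.2, layers=3):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.layers = layers
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, available_kbps, required_kbps, dt=1.0):
                # error > 0 means spare bandwidth, error < 0 means congestion
                error = (available_kbps - required_kbps) / max(required_kbps, 1.0)
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                control = self.kp * error + self.ki * self.integral + self.kd * derivative
                # map the control signal to an integer layer count
                target = round(self.layers * (0.5 + control))
                return max(1, min(self.layers, target))

        pid = LayerPID()
        for bw in (2000, 1500, 900, 600, 1200):       # fluctuating available bandwidth (kbps)
            print(bw, "->", pid.step(available_kbps=bw, required_kbps=1000))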

  14. Minería de datos sobre streams de redes sociales, una herramienta al servicio de la Bibliotecología = Data Mining Streams of Social Networks, A Tool to Improve The Library Services

    Directory of Open Access Journals (Sweden)

    Sonia Jaramillo Valbuena

    2015-12-01

    Social networks such as Twitter and Facebook, together with RSS feeds and blogs, generate a large amount of unstructured data streams. These streams can be used for mining topic-specific influence, graph mining, opinion mining and recommender systems, so that libraries can obtain maximum benefit from the use of Information and Communication Technologies. From the perspective of data stream mining, the processing of these streams poses significant challenges. The algorithms must be adapted to problems such as high arrival rates, unbounded memory demands, diverse data sources and concept drift. In this work, we explore the current state-of-the-art solutions for mining data streams originating from social networks, specifically Facebook and Twitter. We present a review of the most representative algorithms and how they contribute to knowledge discovery in the area of librarianship. We conclude by presenting some of the problems that are the subject of active research.

  15. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; quantum walks on generic graphs, describing methods to calculate the limiting d...

  16. Experimental investigation of acoustic streaming in a cylindrical wave guide up to high streaming Reynolds numbers.

    Science.gov (United States)

    Reyt, Ida; Bailliet, Hélène; Valière, Jean-Christophe

    2014-01-01

    Measurements of streaming velocity are performed by means of Laser Doppler Velocimetry and Particle Image Velocimetry in an experimental apparatus consisting of a cylindrical waveguide having one loudspeaker at each end for high-intensity sound levels. The case of high nonlinear Reynolds number ReNL is particularly investigated. The variation of the axial streaming velocity with respect to the axial and transverse coordinates is compared to available Rayleigh streaming theory. As expected, the measured streaming velocity agrees well with the Rayleigh streaming theory for small ReNL but deviates significantly from such predictions for high ReNL. When the nonlinear Reynolds number is increased, the outer centerline axial streaming velocity gets distorted towards the acoustic velocity nodes until counter-rotating additional vortices are generated near the acoustic velocity antinodes. This kind of behavior is followed by outer streaming cells only, and measurements in the near-wall region show that inner streaming vortices are less affected by this substantial evolution of the fast streaming pattern. Measurements of the transient evolution of streaming velocity provide additional insight into the evolution of fast streaming.

  17. A PETAL OF THE SUNFLOWER: PHOTOMETRY OF THE STELLAR TIDAL STREAM IN THE HALO OF MESSIER 63 (NGC 5055)

    International Nuclear Information System (INIS)

    Chonis, Taylor S.; Martínez-Delgado, David; Gabany, R. Jay; Majewski, Steven R.; Hill, Gary J.; Gralak, Ray; Trujillo, Ignacio

    2011-01-01

    We present deep surface photometry of a very faint, giant arc-loop feature in the halo of the nearby spiral galaxy NGC 5055 (M63) that is consistent with being part of a stellar stream resulting from the disruption of a dwarf satellite galaxy. This faint feature was first detected in early photographic studies by van der Kruit; more recently, in the study of Martínez-Delgado and as presented in this work, the loop has been recognized as the result of a recent minor merger through evidence obtained from wide-field, deep images taken with a telescope of only 0.16 m aperture. The stellar stream is clearly confirmed in additional deep images taken with the 0.5 m telescope of the BlackBird Remote Observatory and the 0.8 m telescope of the McDonald Observatory. This low surface brightness (μ_R ≈ 26 mag arcsec^-2) arc-like structure around the disk of the galaxy extends 14.'0 (~29 kpc projected) from its center, with a projected width of 1.'6 (~3.3 kpc). The stream's morphology is consistent with that of the visible part of a giant, 'great-circle' type stellar stream originating from the recent accretion of a ~10^8 M_☉ dwarf satellite in the last few Gyr. The progenitor satellite's current position and final fate are not conclusive from our data. The color of the stream's stars is consistent with dwarfs in the Local Group and is similar to the outer faint regions of M63's disk and stellar halo. From our photometric study, we detect other low surface brightness 'plumes'; some of these may be extended spiral features related to the galaxy's complex spiral structure, and others may be tidal debris associated with the disruption of the galaxy's outer stellar disk as a result of the accretion event. We are able to differentiate between features related to the tidal stream and faint, blue extended features in the outskirts of the galaxy's disk previously detected by the Galaxy Evolution Explorer satellite. With its highly warped H I gaseous disk (~20

  18. A Statistical Method to Predict Flow Permanence in Dryland Streams from Time Series of Stream Temperature

    Directory of Open Access Journals (Sweden)

    Ivan Arismendi

    2017-12-01

    Full Text Available Intermittent and ephemeral streams represent more than half of the length of the global river network. Dryland freshwater ecosystems are especially vulnerable to changes in human-related water uses as well as shifts in terrestrial climates. Yet, the description and quantification of patterns of flow permanence in these systems is challenging mostly due to difficulties in instrumentation. Here, we took advantage of existing stream temperature datasets in dryland streams in the northwest Great Basin desert, USA, to extract critical information on climate-sensitive patterns of flow permanence. We used a signal detection technique, Hidden Markov Models (HMMs), to extract information from daily time series of stream temperature to diagnose patterns of stream drying. Specifically, we applied HMMs to time series of daily standard deviation (SD) of stream temperature (i.e., dry stream channels typically display highly variable daily temperature records compared to wet stream channels) between April and August (2015–2016). We used information from paired stream and air temperature data loggers as well as co-located stream temperature data loggers with electrical resistors as confirmatory sources of the timing of stream drying. We expanded our approach to an entire stream network to illustrate the utility of the method to detect patterns of flow permanence over a broader spatial extent. We successfully identified and separated signals characteristic of wet and dry stream conditions and their shifts over time. Most of our study sites within the entire stream network exhibited a single state over the entire season (80%), but a portion of them showed one or more shifts among states (17%). We provide recommendations to use this approach based on a series of simple steps. Our findings illustrate a successful method that can be used to rigorously quantify flow permanence regimes in streams using existing records of stream temperature.

  19. A statistical method to predict flow permanence in dryland streams from time series of stream temperature

    Science.gov (United States)

    Arismendi, Ivan; Dunham, Jason B.; Heck, Michael; Schultz, Luke; Hockman-Wert, David

    2017-01-01

    Intermittent and ephemeral streams represent more than half of the length of the global river network. Dryland freshwater ecosystems are especially vulnerable to changes in human-related water uses as well as shifts in terrestrial climates. Yet, the description and quantification of patterns of flow permanence in these systems is challenging mostly due to difficulties in instrumentation. Here, we took advantage of existing stream temperature datasets in dryland streams in the northwest Great Basin desert, USA, to extract critical information on climate-sensitive patterns of flow permanence. We used a signal detection technique, Hidden Markov Models (HMMs), to extract information from daily time series of stream temperature to diagnose patterns of stream drying. Specifically, we applied HMMs to time series of daily standard deviation (SD) of stream temperature (i.e., dry stream channels typically display highly variable daily temperature records compared to wet stream channels) between April and August (2015–2016). We used information from paired stream and air temperature data loggers as well as co-located stream temperature data loggers with electrical resistors as confirmatory sources of the timing of stream drying. We expanded our approach to an entire stream network to illustrate the utility of the method to detect patterns of flow permanence over a broader spatial extent. We successfully identified and separated signals characteristic of wet and dry stream conditions and their shifts over time. Most of our study sites within the entire stream network exhibited a single state over the entire season (80%), but a portion of them showed one or more shifts among states (17%). We provide recommendations to use this approach based on a series of simple steps. Our findings illustrate a successful method that can be used to rigorously quantify flow permanence regimes in streams using existing records of stream temperature.
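
    The core signal-detection step described in these two records, fitting a two-state Gaussian HMM to the daily standard deviation of stream temperature and decoding wet versus dry states, can be sketched as follows. The sketch assumes the third-party hmmlearn package and uses synthetic data in place of the authors' logger records.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # assumes `pip install hmmlearn`

        rng = np.random.default_rng(42)

        # Synthetic daily SD of stream temperature (deg C): a wet period (low SD)
        # followed by a dry channel (high, air-like SD), stacked as one season.
        daily_sd = np.concatenate([rng.normal(0.8, 0.2, 60),    # wet: damped diel signal
                                   rng.normal(4.0, 1.0, 60)])   # dry: high variability
        X = daily_sd.reshape(-1, 1)

        # Two hidden states (wet, dry); Viterbi decoding gives the state sequence.
        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100,
                            random_state=0).fit(X)
        states = model.predict(X)

        # Label the state with the larger mean SD as "dry".
        dry_state = int(np.argmax(model.means_.ravel()))
        first_dry_day = int(np.argmax(states == dry_state))
        print("estimated onset of drying: day", first_dry_day)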

  20. Distributed Extended Kalman Filter for Position, Velocity, Time, Estimation in Satellite Navigation Receivers

    Directory of Open Access Journals (Sweden)

    O. Jakubov

    2013-09-01

    Full Text Available Common techniques for position-velocity-time estimation in satellite navigation, iterative least squares and the extended Kalman filter, involve matrix operations. The matrix inversion and inclusion of a matrix library pose requirements on a computational power and operating platform of the navigation processor. In this paper, we introduce a novel distributed algorithm suitable for implementation in simple parallel processing units each for a tracked satellite. Such a unit performs only scalar sum, subtraction, multiplication, and division. The algorithm can be efficiently implemented in hardware logic. Given the fast position-velocity-time estimator, frequent estimates can foster dynamic performance of a vector tracking receiver. The algorithm has been designed from a factor graph representing the extended Kalman filter by splitting vector nodes into scalar ones resulting in a cyclic graph with few iterations needed. Monte Carlo simulations have been conducted to investigate convergence and accuracy. Simulation case studies for a vector tracking architecture and experimental measurements with a real-time software receiver developed at CTU in Prague were conducted. The algorithm offers compromises in stability, accuracy, and complexity depending on the number of iterations. In scenarios with a large number of tracked satellites, it can outperform the traditional methods at low complexity.
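
    For contrast with the scalar, matrix-free distributed scheme described above, the conventional matrix-based EKF measurement update that such schemes aim to avoid looks roughly as below (a generic numpy sketch, not the authors' algorithm; the measurement model and Jacobian are placeholders).

        import numpy as np

        def ekf_update(x, P, z, h, H_jac, R):
            """One extended Kalman filter measurement update.
            x, P  : prior state estimate and covariance
            z     : measurement vector (e.g., pseudoranges to tracked satellites)
            h     : nonlinear measurement function h(x)
            H_jac : function returning the Jacobian of h at x
            R     : measurement noise covariance
            """
            H = H_jac(x)
            y = z - h(x)                                  # innovation
            S = H @ P @ H.T + R                           # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)                # Kalman gain (matrix inverse!)
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        # Toy example: state = [position, velocity], measurement = position only.
        x = np.array([0.0, 1.0])
        P = np.eye(2)
        h = lambda x: np.array([x[0]])
        H_jac = lambda x: np.array([[1.0, 0.0]])
        print(ekf_update(x, P, z=np.array([0.4]), h=h, H_jac=H_jac, R=np.array([[0.1]])))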

  1. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    Science.gov (United States)

    Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der

    2018-04-01

    Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrocks and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to be linearly varying with distance. The aquifer is divided into two subregions and the continuity conditions of the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived based on the steady-state solution as initial condition and the methods of finite Fourier transform and Laplace transform. Moreover, the solution for stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the real time domain are obtained by a numerical inversion scheme called the Stehfest algorithm. The software MODFLOW is chosen to compare with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
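
    The Stehfest algorithm used to invert the Laplace-domain head and SDR solutions is short enough to sketch directly. The implementation below is the standard formulation with an even number of terms N; the test transform F(s) = 1/(s + 1), whose inverse is exp(-t), is only a stand-in for the aquifer solutions.

        import math

        def stehfest_coefficients(N):
            """Stehfest weights V_k for an even number of terms N."""
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * math.factorial(2 * j) /
                          (math.factorial(N // 2 - j) * math.factorial(j) *
                           math.factorial(j - 1) * math.factorial(k - j) *
                           math.factorial(2 * j - k)))
                V.append((-1) ** (k + N // 2) * s)
            return V

        def stehfest_invert(F, t, N=12):
            """Approximate f(t) from its Laplace transform F(s)."""
            V = stehfest_coefficients(N)
            ln2_t = math.log(2.0) / t
            return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

        # Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
        F = lambda s: 1.0 / (s + 1.0)
        for t in (0.5, 1.0, 2.0):
            print(t, stehfest_invert(F, t), math.exp(-t))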

  2. Smart Meter Data Analytics: Systems, Algorithms and Benchmarking

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Golab, Lukasz; Golab, Wojciech

    2016-01-01

    Smart electricity meters have been replacing conventional meters worldwide, enabling automated collection of fine-grained (e.g., every 15 minutes or hourly) consumption data. A variety of smart meter analytics algorithms and applications have been proposed, mainly in the smart grid literature. ... off-line feature extraction and model building as well as a framework for on-line anomaly detection that we propose. Second, since obtaining real smart meter data is difficult due to privacy issues, we present an algorithm for generating large realistic data sets from a small seed of real data. Third, we implement the proposed benchmark using five representative platforms: a traditional numeric computing platform (Matlab), a relational DBMS with a built-in machine learning toolkit (PostgreSQL/MADlib), a main-memory column store (“System C”), and two distributed data processing platforms (Hive and Spark/Spark Streaming).

  3. Calculation of heat transfer in transversely stream-lined tube bundles with chess arrangement

    International Nuclear Information System (INIS)

    Migaj, V.K.

    1978-01-01

    A semiempirical theory of heat transfer in transversely stream-lined chess-board tube bundles has been developed. The theory is based on a single cylinder model and involves external flow parameter evaluation on the basis of the solidification principle of a vortex zone. The effect of turbulence is estimated according to experimental results. The method is extended to both average and local heat transfer coefficients. Comparison with experiment shows satisfactory agreement

  4. Parametrisation of a Maxwell model for transient tyre forces by means of an extended firefly algorithm

    Directory of Open Access Journals (Sweden)

    Andreas Hackl

    2016-12-01

    Full Text Available Developing functions for advanced driver assistance systems requires very accurate tyre models, especially for the simulation of transient conditions. In the past, parametrisation of a given tyre model based on measurement data showed shortcomings, and the globally optimal solution obtained did not appear to be plausible. In this article, an optimisation strategy is presented, which is able to find plausible and physically feasible solutions by detecting many local outcomes. The firefly algorithm mimics the natural behaviour of fireflies, which use a kind of flashing light to communicate with other members. An algorithm simulating the intensity of the light of a single firefly, diminishing with increasing distances, is implicitly able to detect local solutions on its way to the best solution in the search space. This implicit clustering feature is stressed by an additional explicit clustering step, where local solutions are stored and terminally processed to obtain a large number of possible solutions. The enhanced firefly algorithm will be first applied to the well-known Rastrigin functions and then to the tyre parametrisation problem. It is shown that the firefly algorithm is qualified to find a high number of optimisation solutions, which is required for plausible parametrisation for the given tyre model.
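
    As a point of reference for the optimizer being extended, a plain firefly algorithm on the 2-D Rastrigin function can be sketched as follows (standard attractiveness/random-walk update, without the explicit clustering extension; the hyperparameters are illustrative).

        import numpy as np

        def rastrigin(x):
            return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

        def firefly(n=25, dim=2, iters=200, alpha=0.2, beta0=1.0, gamma=1.0,
                    bounds=(-5.12, 5.12), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pos = rng.uniform(lo, hi, (n, dim))
            light = np.array([rastrigin(p) for p in pos])   # lower is better (brighter)
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if light[j] < light[i]:             # firefly i moves towards brighter j
                            r2 = np.sum((pos[i] - pos[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            pos[i] += (beta * (pos[j] - pos[i])
                                       + alpha * (rng.random(dim) - 0.5))
                            pos[i] = np.clip(pos[i], lo, hi)
                            light[i] = rastrigin(pos[i])
                alpha *= 0.98                               # gradually damp the random walk
            best = int(np.argmin(light))
            return pos[best], light[best]

        print(firefly())   # should approach the global minimum at the origin (value 0)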

  5. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
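
    The O(N) memory bound mentioned above follows because each row of the dynamic programming matrix depends only on the previous row. A minimal sketch of a global alignment score with the simple gap penalty g = rk, keeping one row at a time, might look like this (scoring values are illustrative):

        def global_alignment_score(a, b, match=5, mismatch=-4, r=-4):
            """Needleman-Wunsch style score with linear gap penalty g = r*k,
            using O(len(b)) memory instead of the full O(len(a)*len(b)) matrix."""
            prev = [j * r for j in range(len(b) + 1)]          # row for the empty prefix of a
            for i in range(1, len(a) + 1):
                curr = [i * r] + [0] * len(b)
                for j in range(1, len(b) + 1):
                    sub = match if a[i - 1] == b[j - 1] else mismatch
                    curr[j] = max(prev[j - 1] + sub,           # align a[i-1] with b[j-1]
                                  prev[j] + r,                 # gap in b
                                  curr[j - 1] + r)             # gap in a
                prev = curr
            return prev[-1]

        print(global_alignment_score("GATTACA", "GCATGCT"))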

  6. The second order extended Kalman filter and Markov nonlinear filter for data processing in interferometric systems

    International Nuclear Information System (INIS)

    Ermolaev, P; Volynsky, M

    2014-01-01

    Recurrent stochastic data processing algorithms that represent the interferometric signal as the output of a dynamic system, whose state is described by a vector of parameters, are in some cases more effective than conventional algorithms. Interferometric signals depend nonlinearly on phase. Consequently, it is expedient to apply algorithms of nonlinear stochastic filtering, such as Kalman-type filters. An application of the second-order extended Kalman filter and a Markov nonlinear filter that allows the estimation error to be minimized is described. Experimental results of signal processing are illustrated. A comparison of the algorithms is presented and discussed.

  7. Streams and their future inhabitants

    DEFF Research Database (Denmark)

    Sand-Jensen, K.; Friberg, Nikolai

    2006-01-01

    In this final chapter we look ahead and address four questions: How do we improve stream management? What are the likely developments in the biological quality of streams? In which areas is knowledge on stream ecology insufficient? What can streams offer children of today and adults of tomorrow?...

  8. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yufeng Shan

    2005-02-01

    Full Text Available Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.
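
    The key packetization idea, sizing the application-layer packet so that it decomposes into an integer number of equal-sized RLP packets, is simple to sketch. The RLP payload and header sizes below are invented placeholders, not values from the paper.

        RLP_PAYLOAD = 120    # bytes carried by one radio link protocol packet (assumed)
        APP_HEADER = 12      # application-layer header size in bytes (assumed)

        def build_app_packet(payload: bytes) -> bytes:
            """Pad an application-layer packet so that header + payload + padding
            is an exact multiple of the RLP payload size."""
            size = APP_HEADER + len(payload)
            padding = (-size) % RLP_PAYLOAD
            return b"\x00" * APP_HEADER + payload + b"\x00" * padding

        def split_into_rlp(app_packet: bytes):
            """Decompose the application packet into equal-sized RLP packets."""
            assert len(app_packet) % RLP_PAYLOAD == 0
            return [app_packet[i:i + RLP_PAYLOAD]
                    for i in range(0, len(app_packet), RLP_PAYLOAD)]

        pkt = build_app_packet(b"x" * 500)
        rlp = split_into_rlp(pkt)
        print(len(pkt), len(rlp), {len(p) for p in rlp})   # e.g. 600 5 {120}

        # Per-packet FEC and priority-based ARQ would then operate on `rlp`,
        # retransmitting only corrupted RLP packets within one application packet.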

  9. Numerical Methods for Solution of the Extended Linear Quadratic Control Problem

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Frison, Gianluca; Gade-Nielsen, Nicolai Fog

    2012-01-01

    In this paper we present the extended linear quadratic control problem, its efficient solution, and a discussion of how it arises in the numerical solution of nonlinear model predictive control problems. The extended linear quadratic control problem is the optimal control problem corresponding to the Karush-Kuhn-Tucker system that constitutes the majority of computational work in constrained nonlinear and linear model predictive control problems solved by efficient MPC-tailored interior-point and active-set algorithms. We state various methods of solving the extended linear quadratic control problem and discuss instances in which it arises. The methods discussed in the paper have been implemented in efficient C code for both CPUs and GPUs for a number of test examples.
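
    One standard way to solve the unconstrained core of a linear quadratic control problem is a backward Riccati recursion followed by a forward rollout, sketched below. This is a generic illustration of that method, not the extended formulation or the C/GPU implementation discussed in the paper, and the matrices are arbitrary examples.

        import numpy as np

        def lq_solve(A, B, Q, R, x0, N):
            """Unconstrained finite-horizon LQ control via backward Riccati recursion.
            Minimizes sum_{k=0}^{N-1} (x_k' Q x_k + u_k' R u_k) + x_N' Q x_N."""
            P = Q.copy()                     # terminal cost-to-go
            gains = []
            for _ in range(N):               # backward pass: feedback gains K_k
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
                gains.append(K)
            gains.reverse()

            xs, us, x = [x0], [], x0         # forward rollout with u_k = -K_k x_k
            for K in gains:
                u = -K @ x
                x = A @ x + B @ u
                us.append(u)
                xs.append(x)
            return np.array(xs), np.array(us)

        # Double integrator example.
        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt ** 2], [dt]])
        Q = np.diag([1.0, 0.1])
        R = np.array([[0.01]])
        xs, us = lq_solve(A, B, Q, R, x0=np.array([1.0, 0.0]), N=50)
        print(xs[-1])                        # state driven towards the origin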

  10. On implementation of the extended interior penalty function. [optimum structural design

    Science.gov (United States)

    Cassis, J. H.; Schmit, L. A., Jr.

    1976-01-01

    The extended interior penalty function formulation is implemented. A rational method for determining the transition between the interior and extended parts is set forth. The formulation includes a straightforward method for avoiding design points with some negative components, which are physically meaningless in structural analysis. The technique, when extended to problems involving parametric constraints, can facilitate closed form integration of the penalty terms over the most important parts of the parameter interval. The method lends itself well to the use of approximation concepts, such as design variable linking, constraint deletion and Taylor series expansions of response quantities in terms of design variables. Examples demonstrating the algorithm, in the context of planar orthogonal frames subjected to ground motion, are included.

  11. Salamander occupancy in headwater stream networks

    Science.gov (United States)

    Grant, E.H.C.; Green, L.E.; Lowe, W.H.

    2009-01-01

    1. Stream ecosystems exhibit a highly consistent dendritic geometry in which linear habitat units intersect to create a hierarchical network of connected branches. 2. Ecological and life history traits of species living in streams, such as the potential for overland movement, may interact with this architecture to shape patterns of occupancy and response to disturbance. Specifically, large-scale habitat alteration that fragments stream networks and reduces connectivity may reduce the probability a stream is occupied by sensitive species, such as stream salamanders. 3. We collected habitat occupancy data on four species of stream salamanders in first-order (i.e. headwater) streams in undeveloped and urbanised regions of the eastern U.S.A. We then used an information-theoretic approach to test alternative models of salamander occupancy based on a priori predictions of the effects of network configuration, region and salamander life history. 4. Across all four species, we found that streams connected to other first-order streams had higher occupancy than those flowing directly into larger streams and rivers. For three of the four species, occupancy was lower in the urbanised region than in the undeveloped region. 5. These results demonstrate that the spatial configuration of stream networks within protected areas affects the occurrences of stream salamander species. We strongly encourage preservation of network connections between first-order streams in conservation planning and management decisions that may affect stream species.

  12. Realization of Deutsch-like algorithm using ensemble computing

    International Nuclear Information System (INIS)

    Wei Daxiu; Luo Jun; Sun Xianping; Zeng Xizhi

    2003-01-01

    The Deutsch-like algorithm [Phys. Rev. A 63 (2001) 034101] distinguishes between even and odd query functions using fewer function calls than its possible classical counterpart in a two-qubit system, but the same method cannot be applied to a multi-qubit system. We propose a new approach for solving the Deutsch-like problem using ensemble computing. The proposed algorithm needs an ancillary qubit and can easily be extended to a multi-qubit system with one query. Our ensemble algorithm, beginning with an easily prepared initial state, has three main steps. The classification of the functions can be obtained directly from the spectra of the ancilla qubit. We also demonstrate the new algorithm in a four-qubit molecular system using nuclear magnetic resonance (NMR). One hydrogen and three carbons are selected as the four qubits, with one of the carbons serving as the ancilla qubit. We chose two unitary transformations, corresponding to two functions (one odd and one even), to validate the ensemble algorithm. The results show that the experiment is successful and supports the feasibility of our ensemble algorithm for solving the Deutsch-like problem.

  13. Advanced metaheuristic algorithms for laser optimization

    International Nuclear Information System (INIS)

    Tomizawa, H.

    2010-01-01

    A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser plays important roles as a seed light source or as a photocathode-illuminating light source to generate a high-brightness electron bunch. Control of laser pulse characteristics is required for many kinds of experiments. However, the laser must be tuned and customized for each requirement by laser experts. Automatic laser tuning therefore needs to be realized with sophisticated algorithms. Metaheuristic algorithms are useful candidates for finding solutions that are as close to optimal as is acceptable. A metaheuristic laser tuning system is expected to save human resources and time during laser preparation. I have shown successful results with a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to automatically choose one of the best laser alignments for each experimental requirement. (author)

  14. Online Tracking Algorithms on GPUs for the P-barANDA Experiment at FAIR

    International Nuclear Information System (INIS)

    Bianchi, L; Herten, A; Ritman, J; Stockmanns, T; Adinetz, A.; Pleiter, D; Kraus, J

    2015-01-01

    P-barANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P-barANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged-particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented. (paper)

  15. Stream hydraulics and temperature determine the metabolism of geothermal Icelandic streams

    Directory of Open Access Journals (Sweden)

    Demars B. O.L.

    2011-07-01

    Full Text Available Stream ecosystem metabolism plays a critical role in planetary biogeochemical cycling. Stream benthic habitat complexity and the available surface area for microbes relative to the free-flowing water volume are thought to be important determinants of ecosystem metabolism. Unfortunately, the engineered deepening and straightening of streams for drainage purposes could compromise stream natural services. Stream channel complexity may be quantitatively expressed with hydraulic parameters such as water transient storage, storage residence time, and water spiralling length. The temperature dependence of whole stream ecosystem respiration (ER), gross primary productivity (GPP) and net ecosystem production (NEP = GPP − ER) has recently been evaluated with a “natural experiment” in Icelandic geothermal streams along a 5–25 °C temperature gradient. There remained, however, a substantial amount of unexplained variability in the statistical models, which may be explained by hydraulic parameters found to be unrelated to temperature. We also specifically tested the additional and predicted synergistic effects of water transient storage and temperature on ER, using novel, more accurate, methods. Both ER and GPP were highly related to water transient storage (or water spiralling length) but not to the storage residence time. While there was an additional effect of water transient storage and temperature on ER (r2 = 0.57; P = 0.015), GPP was more related to water transient storage than temperature. The predicted synergistic effect could not be confirmed, most likely due to data limitation. Our interpretation, based on causal statistical modelling, is that the metabolic balance of streams (NEP) was primarily determined by the temperature dependence of respiration. Further field and experimental work is required to test the predicted synergistic effect on ER. Meanwhile, since higher metabolic activities allow for higher pollutant degradation or uptake

  16. A Performance Comparison Between Extended Kalman Filter and Unscented Kalman Filter in Power System Dynamic State Estimation

    DEFF Research Database (Denmark)

    Khazraj, Hesam; Silva, Filipe Miguel Faria da; Bak, Claus Leth

    2016-01-01

    Dynamic State Estimation (DSE) is a critical tool for the analysis, monitoring and planning of a power system. The concept of DSE involves designing state estimation with Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) methods, which can be used by wide area monitoring to improve... A non-linear state estimator is developed in MatLab to solve the states by applying the unscented Kalman filter (UKF) and Extended Kalman Filter (EKF) algorithms. Finally, a DSE model is built for a 14-bus power system network to evaluate the proposed algorithms. This article will focus on comparing...

  17. The Advent of Streaming Television in Denmark - Ratings Revisited

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    Digital media convergence is turning television practices upside down, including advertising, the motives for political and administrative decisions, and also extends to planning, producing, distributing, and programming content (Buzzard 2012; Cunningham and Silver 2013; Havens 2014; Ihlebæk, Syv...) ...-demand. It discusses the potential impact of the declining accuracy of audience measurement on market actors’ decisions concerning streaming, as well as potential strategies for improving audience measurement. ... It is noteworthy that although the shift towards online television distribution entails that viewing becomes measurable by the existing system for online audience tracking (Gemius), market actors have so far failed in their attempt to consolidate online measurements of viewing time with audience ratings of flow...

  18. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    Science.gov (United States)

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  19. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    Science.gov (United States)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, I.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.

  20. MATLAB tensor classes for fast algorithm prototyping.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor as matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp tensor and tucker tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
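
    The mode-n matricization referred to in the abstract is easy to sketch outside MATLAB. The NumPy fragment below is an illustration only, not the Sandia toolbox itself; the helper names unfold/fold are invented for this example.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: arrange the mode-`mode` fibers as the columns of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold`: rebuild the tensor with the given original shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)

X = np.arange(24).reshape(2, 3, 4)               # a small 2 x 3 x 4 tensor
X1 = unfold(X, 1)                                # its mode-1 unfolding, a 3 x 8 matrix
assert np.array_equal(fold(X1, 1, X.shape), X)   # the round trip recovers the tensor
```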

  1. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    Directory of Open Access Journals (Sweden)

    Ping Zeng

    Full Text Available In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and shows great promise for use in real-world applications.
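
    To illustrate the general idea of turning symbol-space prefix matching into numeric comparisons, here is a toy Python sketch (not the published MH algorithm): patterns are grouped by length, encoded as integers, and located by binary search. The class name PrefixMatcher is made up for this example.

```python
import bisect

class PrefixMatcher:
    """Toy multi-pattern matcher for strings matched from a fixed starting position
    (as in URL matching): byte strings become integers, looked up by binary search."""
    def __init__(self, patterns):
        self.by_len = {}
        for p in patterns:
            b = p.encode()
            self.by_len.setdefault(len(b), []).append(int.from_bytes(b, "big"))
        for keys in self.by_len.values():
            keys.sort()

    def match(self, text):
        data = text.encode()
        hits = []
        for length, keys in self.by_len.items():
            if len(data) < length:
                continue
            key = int.from_bytes(data[:length], "big")   # numeric encoding of the prefix
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                hits.append(text[:length])
        return hits

m = PrefixMatcher(["/index", "/img/", "/api/v1/"])
print(m.match("/api/v1/users?id=7"))                     # ['/api/v1/']
```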

  2. A combination-weighted Feldkamp-based reconstruction algorithm for cone-beam CT

    International Nuclear Information System (INIS)

    Mori, Shinichiro; Endo, Masahiro; Komatsu, Shuhei; Kandatsu, Susumu; Yashiro, Tomoyasu; Baba, Masayuki

    2006-01-01

    The combination-weighted Feldkamp algorithm (CW-FDK) was developed and tested in a phantom in order to reduce cone-beam artefacts and enhance cranio-caudal reconstruction coverage in an attempt to improve image quality when utilizing cone-beam computed tomography (CBCT). Using a 256-slice cone-beam CT (256CBCT), image quality (CT-number uniformity and geometrical accuracy) was quantitatively evaluated in phantom and clinical studies, and the results were compared to those obtained with the original Feldkamp algorithm. A clinical study was done in lung cancer patients under breath holding and free breathing. Image quality for the original Feldkamp algorithm is degraded at the edge of the scan region due to the missing volume, commensurate with the cranio-caudal distance between the reconstruction and central planes. The CW-FDK extended the reconstruction coverage to equal the scan coverage and improved reconstruction accuracy, unaffected by the cranio-caudal distance. The extended reconstruction coverage with good image quality provided by the CW-FDK will be clinically investigated for improving diagnostic and radiotherapy applications. In addition, this algorithm can also be adapted for use in relatively wide cone-angle CBCT such as with a flat-panel detector CBCT

  3. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

    Full Text Available In this paper, by using the symmetrical properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT) is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and then compared with that of the existing FHT algorithms. The results of these comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.
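
    The DHT underlying these FHT algorithms is easy to check against its definition. The sketch below uses plain NumPy (not the authors' radix-2/radix-4 butterflies) and verifies the transform via the standard identity H = Re(X) − Im(X), where X is the DFT.

```python
import numpy as np

def dht_direct(x):
    """Discrete Hartley transform from its definition, H[k] = sum_n x[n] * cas(2*pi*k*n/N)."""
    n = np.arange(len(x))
    arg = 2.0 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(arg) + np.sin(arg)) @ x

def dht_via_fft(x):
    """Same transform through the FFT: H = Re(X) - Im(X)."""
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(dht_direct(x), dht_via_fft(x))   # both routes agree
```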

  4. A Quantised State Systems Approach for Jacobian Free Extended Kalman Filtering

    DEFF Research Database (Denmark)

    Alminde, Lars; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2007-01-01

    Model based methods for control of intelligent autonomous systems rely on a state estimate being available. One of the most common methods to obtain a state estimate for non-linear systems is the Extended Kalman Filter (EKF) algorithm. In order to apply the EKF an expression must be available...

  5. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

    In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and is solved on a parallel computer network which is composed of T-800 transputers. T-800 transputer is a message-passing type MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides convergence analysis and improvement of the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN(or ND), DD, NN, and mixed pseudo-boundary conditions(a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in two-subdomain case and various underlying features are explored. The analysis shows that the convergence rate of the algorithm highly depends on the pseudo-boundary conditions and the theoretically best one is the mixed boundary conditions(MM conditions). Also it is shown that there may exist a significant discrepancy between continuous model analysis and discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation in pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and any relaxation(under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
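
    As a rough illustration of the Schwarz alternating procedure with Dirichlet (DD) pseudo-boundary conditions, the following serial Python sketch solves −u'' = 1 on two overlapping 1-D subdomains. It is only a minimal stand-in for the parallel transputer implementation described in the thesis; the grid size, overlap, and sweep count are arbitrary.

```python
import numpy as np

def solve_dirichlet(f, left, right, h):
    """Direct solve of -u'' = f on the interior nodes of a uniform 1-D grid,
    with Dirichlet values `left`/`right` at the two bounding nodes."""
    n = len(f)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    rhs = h**2 * f
    rhs[0] += left
    rhs[-1] += right
    return np.linalg.solve(A, rhs)

# Alternating Schwarz with Dirichlet (DD) pseudo-boundary conditions on two
# overlapping subdomains, for -u'' = 1 with u(0) = u(1) = 0.
N, overlap = 41, 8
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)
u = np.zeros(N)
mid = N // 2
for _ in range(30):
    i1 = slice(1, mid + overlap)            # interior nodes of subdomain 1
    u[i1] = solve_dirichlet(f[i1], 0.0, u[mid + overlap], h)
    i2 = slice(mid - overlap, N - 1)        # interior nodes of subdomain 2
    u[i2] = solve_dirichlet(f[i2], u[mid - overlap - 1], 0.0, h)

print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))  # error shrinks as the sweeps converge
```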

  6. A recurrence-weighted prediction algorithm for musical analysis

    Science.gov (United States)

    Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon

    2018-03-01

    Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music, and present new algorithms for extending a sample of music, while maintaining characteristics similar to the original piece. By using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues so as to take into account recurrence times, and demonstrate with examples, how the new algorithm can produce predictions with a high degree of similarity to the original sample.
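
    A minimal sketch of the classical Lorenz method of analogues, which the paper extends with recurrence-time weighting, might look as follows; the inverse-distance weights used here are an assumption of this example, not the authors' recurrence weighting.

```python
import numpy as np

def analogue_forecast(series, emb_dim=4, n_neighbours=5, horizon=1):
    """Method of analogues: delay-embed the series, find the past states closest
    to the present one, and average their successors (closer analogues weigh more)."""
    series = np.asarray(series, dtype=float)
    m = len(series) - emb_dim - horizon + 1
    X = np.array([series[i:i + emb_dim] for i in range(m)])               # embedded states
    y = np.array([series[i + emb_dim + horizon - 1] for i in range(m)])   # their successors
    dist = np.linalg.norm(X - series[-emb_dim:], axis=1)
    nearest = np.argsort(dist)[:n_neighbours]
    return float(np.average(y[nearest], weights=1.0 / (dist[nearest] + 1e-12)))

t = np.linspace(0.0, 20.0 * np.pi, 2000)
noisy_sine = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(analogue_forecast(noisy_sine))       # one-step-ahead forecast of the series
```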

  7. Control algorithms and applications of the wavefront sensorless adaptive optics

    Science.gov (United States)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with a conventional adaptive optics (AO) system, a wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It has a simpler system architecture than conventional AO and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, its wavefront correction methods are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a particular control algorithm to improve that metric, as sketched below. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free and model-based control are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. The extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging of the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects are also discussed.
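
    A minimal model-free example, assuming a toy quadratic image-sharpness metric over a handful of correction modes (not a real AO system), could read:

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = rng.standard_normal(12)       # unknown aberration over 12 correction modes

def metric(correction):
    """Toy image-sharpness metric: largest when the residual aberration vanishes."""
    residual = true_phase - correction
    return np.exp(-np.sum(residual**2))

# Model-free optimisation as in wavefront-sensorless AO: perturb the correction,
# evaluate the metric, and keep the perturbation only if the metric improves.
correction = np.zeros(12)
best = metric(correction)
step = 0.5
for _ in range(5000):
    trial = correction + step * rng.standard_normal(12)
    m = metric(trial)
    if m > best:
        correction, best = trial, m
    else:
        step *= 0.999                      # slowly shrink the search radius
print(best)                                # non-decreasing over the iterations
```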

  8. A novel image encryption algorithm using chaos and reversible cellular automata

    Science.gov (United States)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme based on reversible cellular automata (RCA) combined with chaos is proposed. In this algorithm, an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata are used. Each pixel of the image is split into units of 4 bits, and a pseudorandom key stream generated by the intertwining logistic map is used to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve bit-level diffusion; only the higher 4 bits of each pixel are considered, because they carry almost all of the information in an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
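
    The confusion stage can be sketched with a plain logistic map standing in for the paper's intertwining logistic map; the helper names below are hypothetical, and the diffusion stage with reversible cellular automata is omitted.

```python
import numpy as np

def logistic_keystream(n, x0=0.61, r=3.99):
    """Chaotic key stream from a plain logistic map (a stand-in for the paper's
    intertwining logistic map)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def permute_nibbles(image_bytes, key=(0.61, 3.99)):
    """Confusion-stage sketch: split each pixel into two 4-bit units and permute
    the units in the order dictated by the chaotic key stream."""
    pixels = np.frombuffer(image_bytes, dtype=np.uint8)
    nibbles = np.concatenate([pixels >> 4, pixels & 0x0F])
    order = np.argsort(logistic_keystream(nibbles.size, *key))
    return nibbles[order], order           # `order` lets the receiver invert the step

scrambled, order = permute_nibbles(bytes(range(16)))
print(scrambled)
```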

  9. Design of Optimized Multimedia Data Streaming Management Using OMDSM over Mobile Networks

    Directory of Open Access Journals (Sweden)

    Byungjoo Park

    2017-01-01

    Full Text Available Mobility management is an essential challenge for supporting reliable multimedia data streaming over wireless and mobile networks in the Internet of Things (IoT) for location-based mobile marketing applications. The communications among mobile nodes in the IoT need a seamless handover for delivering high-quality multimedia services. The Internet Engineering Task Force (IETF) mobility management schemes are proposals for handling the routing of IPv6 packets to mobile nodes that have moved away from their home network. However, the standard mobility management scheme cannot prevent packet losses due to long handover latency. In this article, a new enhanced data streaming route optimization scheme is introduced that uses an optimized Transmission Control Protocol (TCP) realignment algorithm in order to prevent the packet disordering problem whenever nodes in the IoT environment communicate with each other. With the proposed scheme, packet sequence disordering can be prevented, the packet traffic speed can be controlled, and the TCP performance can be improved. The experimental results show that managing the packet order in the proposed scheme remarkably increases the overall TCP performance over mobile networks within the IoT environment, thus ensuring high quality of service (QoS) for multimedia data streaming in location-based mobile marketing applications.

  10. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  11. Resonant Drag Instabilities in protoplanetary disks: the streaming instability and new, faster-growing instabilities

    Science.gov (United States)

    Squire, Jonathan; Hopkins, Philip F.

    2018-04-01

    We identify and study a number of new, rapidly growing instabilities of dust grains in protoplanetary disks, which may be important for planetesimal formation. The study is based on the recognition that dust-gas mixtures are generically unstable to a Resonant Drag Instability (RDI), whenever the gas, absent dust, supports undamped linear modes. We show that the "streaming instability" is an RDI associated with epicyclic oscillations; this provides simple interpretations for its mechanisms and accurate analytic expressions for its growth rates and fastest-growing wavelengths. We extend this analysis to more general dust streaming motions and other waves, including buoyancy and magnetohydrodynamic oscillations, finding various new instabilities. Most importantly, we identify the disk "settling instability," which occurs as dust settles vertically into the midplane of a rotating disk. For small grains, this instability grows many orders of magnitude faster than the standard streaming instability, with a growth rate that is independent of grain size. Growth timescales for realistic dust-to-gas ratios are comparable to the disk orbital period, and the characteristic wavelengths are more than an order of magnitude larger than the streaming instability (allowing the instability to concentrate larger masses). This suggests that in the process of settling, dust will band into rings then filaments or clumps, potentially seeding dust traps, high-metallicity regions that in turn seed the streaming instability, or even overdensities that coagulate or directly collapse to planetesimals.

  12. Dissonance and Neutralization of Subscription Streaming Era Digital Music Piracy : An Initial Exploration

    OpenAIRE

    Riekkinen, Janne

    2016-01-01

    Both legal and illegal forms of digital music consumption continue to evolve with wider adoption of subscription streaming services. With this paper, we aim to extend theory on digital music piracy by showing that the rising controversy and diminishing acceptance of illegal forms of consumption call for new theoretical components and interactions. We introduce a model that integrates insights from neutralization and cognitive dissonance theories. As an initial empirical test of th...

  13. Percent Forest Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  14. Percent Agriculture Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  15. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth and the computational power used to decode the video are wasted because the user only watches a small portion (i.e., viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video

  16. Morphology of a Wetland Stream

    Science.gov (United States)

    Jurmu; Andrle

    1997-11-01

    Little attention has been paid to wetland stream morphology in the geomorphological and environmental literature, and in the recently expanding wetland reconstruction field, stream design has been based primarily on stream morphologies typical of nonwetland alluvial environments. Field investigation of a wetland reach of Roaring Brook, Stafford, Connecticut, USA, revealed several significant differences between the morphology of this stream and the typical morphology of nonwetland alluvial streams. Six morphological features of the study reach were examined: bankfull flow, meanders, pools and riffles, thalweg location, straight reaches, and cross-sectional shape. It was found that bankfull flow definitions originating from streams in nonwetland environments did not apply. Unusual features observed in the wetland reach include tight bends and a large axial wavelength to width ratio. A lengthy straight reach exists that exceeds what is typically found in nonwetland alluvial streams. The lack of convex bank point bars in the bends, a greater channel width at riffle locations, an unusual thalweg location, and small form ratios (a deep and narrow channel) were also differences identified. Further study is needed on wetland streams of various regions to determine if differences in morphology between alluvial and wetland environments can be applied in order to improve future designs of wetland channels. KEY WORDS: Stream morphology; Wetland restoration; Wetland creation; Bankfull; Pools and riffles; Meanders; Thalweg

  17. Energy management algorithm for an optimum control of a photovoltaic water pumping system

    International Nuclear Information System (INIS)

    Sallem, Souhir; Chaabene, Maher; Kamoun, M.B.A.

    2009-01-01

    The effectiveness of photovoltaic water pumping systems depends on the adequacy between the generated energy and the volume of pumped water. This paper presents an intelligent algorithm which decides on the interconnection modes and instants of the photovoltaic installation components: battery, water pump and photovoltaic panel. The decision is made by fuzzy rules on the basis of the Photovoltaic Panel Generation (PVPG) forecast for the considered day, the power required by the load, and the battery safety. The algorithm aims to extend the operation time of the water pump by controlling a switching unit which links the system components according to multi-objective management criteria. The implementation demonstrates that the approach extends the pumping period by more than 5 h a day, which gives a mean daily improvement of 97% in the pumped water volume.

  18. Dynamical modeling of tidal streams

    International Nuclear Information System (INIS)

    Bovy, Jo

    2014-01-01

    I present a new framework for modeling the dynamics of tidal streams. The framework consists of simple models for the initial action-angle distribution of tidal debris, which can be straightforwardly evolved forward in time. Taking advantage of the essentially one-dimensional nature of tidal streams, the transformation to position-velocity coordinates can be linearized and interpolated near a small number of points along the stream, thus allowing for efficient computations of a stream's properties in observable quantities. I illustrate how to calculate the stream's average location (its 'track') in different coordinate systems, how to quickly estimate the dispersion around its track, and how to draw mock stream data. As a generative model, this framework allows one to compute the full probability distribution function and marginalize over or condition it on certain phase-space dimensions as well as convolve it with observational uncertainties. This will be instrumental in proper data analysis of stream data. In addition to providing a computationally efficient practical tool for modeling the dynamics of tidal streams, the action-angle nature of the framework helps elucidate how the observed width of the stream relates to the velocity dispersion or mass of the progenitor, and how the progenitors of 'orphan' streams could be located. The practical usefulness of the proposed framework crucially depends on the ability to calculate action-angle variables for any orbit in any gravitational potential. A novel method for calculating actions, frequencies, and angles in any static potential using a single orbit integration is described in the Appendix.

  19. Human impacts to mountain streams

    Science.gov (United States)

    Wohl, Ellen

    2006-09-01

    Mountain streams are here defined as channel networks within mountainous regions of the world. This definition encompasses tremendous diversity of physical and biological conditions, as well as history of land use. Human effects on mountain streams may result from activities undertaken within the stream channel that directly alter channel geometry, the dynamics of water and sediment movement, contaminants in the stream, or aquatic and riparian communities. Examples include channelization, construction of grade-control structures or check dams, removal of beavers, and placer mining. Human effects can also result from activities within the watershed that indirectly affect streams by altering the movement of water, sediment, and contaminants into the channel. Deforestation, cropping, grazing, land drainage, and urbanization are among the land uses that indirectly alter stream processes. An overview of the relative intensity of human impacts to mountain streams is provided by a table summarizing human effects on each of the major mountainous regions with respect to five categories: flow regulation, biotic integrity, water pollution, channel alteration, and land use. This table indicates that very few mountains have streams not at least moderately affected by land use. The least affected mountainous regions are those at very high or very low latitudes, although our scientific ignorance of conditions in low-latitude mountains in particular means that streams in these mountains might be more altered than is widely recognized. Four case studies from northern Sweden (arctic region), Colorado Front Range (semiarid temperate region), Swiss Alps (humid temperate region), and Papua New Guinea (humid tropics) are also used to explore in detail the history and effects on rivers of human activities in mountainous regions. The overview and case studies indicate that mountain streams must be managed with particular attention to upstream/downstream connections, hillslope

  20. A cost-effective laser scanning method for mapping stream channel geometry and roughness

    Science.gov (United States)

    Lam, Norris; Nathanson, Marcus; Lundgren, Niclas; Rehnström, Robin; Lyon, Steve

    2015-04-01

    In this pilot project, we combine an Arduino Uno and a SICK LMS111 outdoor laser ranging camera to acquire high resolution topographic area scans for a stream channel. The microprocessor and imaging system were installed in a custom gondola and suspended from a wire cable system. To demonstrate the system's capabilities for capturing stream channel topography, a small stream (< 2 m wide) in the Krycklan Catchment Study was temporarily diverted and scanned. Area scans along the stream channel resulted in a point spacing of 4 mm and a point cloud density of 5600 points/m2 for the 5 m by 2 m area. A grain size distribution of the streambed material was extracted from the point cloud using a moving-window local-maxima search algorithm. The median, 84th and 90th percentiles (common metrics to describe channel roughness) of this distribution were found to be within the range of measured values, while the largest modelled element was approximately 35% smaller than its measured counterpart. The laser scanning system captured grain sizes between 30 mm and 255 mm (coarse gravel/pebbles and boulders based on the Wentworth (1922) scale). This demonstrates that our system was capable of resolving both large-scale geometry (e.g. bed slope and stream channel width) and small-scale channel roughness elements (e.g. coarse gravel/pebbles and boulders) for the study area. We further show that the point cloud resolution is suitable for estimating ecohydraulic parameters such as Manning's n and hydraulic radius. Although more work is needed to fine-tune our system's design, these preliminary results are encouraging, specifically for those with a limited operational budget.
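
    The moving-window local-maxima idea can be illustrated on a gridded bed-elevation raster with SciPy; this is only a schematic stand-in for the authors' point-cloud processing, and the window size is arbitrary.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def grain_tops(elevation, window=9):
    """Moving-window local-maxima search: keep cells equal to the maximum of
    their (window x window) neighbourhood."""
    return np.argwhere(elevation == maximum_filter(elevation, size=window))

# Synthetic gridded bed surface standing in for the rasterised point cloud.
bed = np.random.default_rng(2).random((200, 200))
tops = grain_tops(bed, window=9)
print(len(tops), "candidate grain tops")   # one peak per detected roughness element
```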

  1. A linear programming algorithm to test for jamming in hard-sphere packings

    International Nuclear Information System (INIS)

    Donev, Aleksandar; Torquato, Salvatore.; Stillinger, Frank H.; Connelly, Robert

    2004-01-01

    Jamming in hard-particle packings has been the subject of considerable interest in recent years. In a paper by Torquato and Stillinger [J. Phys. Chem. B 105 (2001)], a classification scheme of jammed packings into hierarchical categories of locally, collectively and strictly jammed configurations has been proposed. They suggest that these jamming categories can be tested using numerical algorithms that analyze an equivalent contact network of the packing under applied displacements, but leave the design of such algorithms as a future task. In this work, we present a rigorous and practical algorithm to assess whether an ideal hard-sphere packing in two or three dimensions is jammed according to the aforementioned categories. The algorithm is based on linear programming and is applicable to regular as well as random packings of finite size with hard-wall and periodic boundary conditions. If the packing is not jammed, the algorithm yields representative multi-particle unjamming motions. Furthermore, we extend the jamming categories and the testing algorithm to packings with significant interparticle gaps. We describe in detail two variants of the proposed randomized linear programming approach to test for jamming in hard-sphere packings. The first algorithm treats ideal packings in which particles form perfect contacts. Another algorithm treats the case of jamming in packings with significant interparticle gaps. This extended algorithm allows one to explore more fully the nature of the feasible particle displacements. We have implemented the algorithms and applied them to ordered as well as random packings of circular disks and spheres with periodic boundary conditions. Some representative results for large disordered disk and sphere packings are given, but more robust and efficient implementations as well as further applications (e.g., non-spherical particles) are anticipated for the future
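
    In the spirit of the randomized linear programming test, but heavily simplified (2-D, ideal contacts, no hard walls or periodic boundaries), a sketch using scipy.optimize.linprog could look like the following; the function name and the way rigid-body motions are pinned out are choices made for this example, not part of the original algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def unjamming_motion_2d(centers, radii, trials=5, tol=1e-9):
    """Randomised LP test for a 2-D packing with ideal contacts.  Hard walls and
    periodic boundaries are omitted; rigid-body motions are excluded by pinning
    particle 0 and the y-displacement of particle 1."""
    n = len(centers)
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            d = centers[j] - centers[i]
            if np.linalg.norm(d) - radii[i] - radii[j] < 1e-8:   # contacting pair
                nij = d / np.linalg.norm(d)
                row = np.zeros(2 * n)
                row[2 * i:2 * i + 2] = nij       # linearised non-penetration:
                row[2 * j:2 * j + 2] = -nij      # nij . (dx_i - dx_j) <= 0
                rows.append(row)
    A_ub, b_ub = np.array(rows), np.zeros(len(rows))
    bounds = [(0, 0), (0, 0), (-1, 1), (0, 0)] + [(-1, 1)] * (2 * n - 4)
    rng = np.random.default_rng(0)
    for _ in range(trials):
        c = -rng.standard_normal(2 * n)          # maximise a random linear objective
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if res.success and -res.fun > tol:
            return res.x.reshape(n, 2)           # a representative unjamming motion
    return None                                  # none found: consistent with jamming
```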

  2. How and Why Does Stream Water Temperature Vary at Small Spatial Scales in a Headwater Stream?

    Science.gov (United States)

    Morgan, J. C.; Gannon, J. P.; Kelleher, C.

    2017-12-01

    The temperature of stream water is controlled by climatic variables, runoff/baseflow generation, and hyporheic exchange. Hydrologic conditions such as gaining/losing reaches and sources of inflow can vary dramatically along a stream on a small spatial scale. In this work, we attempt to discern the extent that the factors of air temperature, groundwater inflow, and precipitation influence stream temperature at small spatial scales along the length of a stream. To address this question, we measured stream temperature along the perennial stream network in a 43 ha catchment with a complex land use history in Cullowhee, NC. Two water temperature sensors were placed along the stream network on opposite sides of the stream at 100-meter intervals and at several locations of interest (i.e. stream junctions). The forty total sensors recorded the temperature every 10 minutes for one month in the spring and one month in the summer. A subset of sampling locations where stream temperature was consistent or varied from one side of the stream to the other were explored with a thermal imaging camera to obtain a more detailed representation of the spatial variation in temperature at those sites. These thermal surveys were compared with descriptions of the contributing area at the sample sites in an effort to discern specific causes of differing flow paths. Preliminary results suggest that on some branches of the stream stormflow has less influence than regular hyporheic exchange, while other tributaries can change dramatically with stormflow conditions. We anticipate this work will lead to a better understanding of temperature patterns in stream water networks. A better understanding of the importance of small-scale differences in flow paths to water temperature may be able to inform watershed management decisions in the future.

  3. State-Space Equations and the First-Phase Algorithm for Signal Control of Single Intersections

    Institute of Scientific and Technical Information of China (English)

    LI Jinyuan; PAN Xin; WANG Xiqin

    2007-01-01

    State-space equations were applied to formulate the queuing and delay of traffic at a single intersection in this paper. The signal control of a single intersection was then modeled as a discrete-time optimal control problem, with consideration of the constraints of stream conflicts, saturation flow rate, minimum green time, and maximum green time. The problem cannot be solved directly due to the nonlinear constraints. However, the results of qualitative analysis were used to develop a first-phase signal control algorithm. Simulation results show that the algorithm substantially reduces the total delay compared to fixed-time control.

  4. Transformation of nonlinear discrete-time system into the extended observer form

    Science.gov (United States)

    Kaparin, V.; Kotta, Ü.

    2018-04-01

    The paper addresses the problem of transforming discrete-time single-input single-output nonlinear state equations into the extended observer form, which, besides the input and output, also depends on a finite number of their past values. Necessary and sufficient conditions for the existence of both the extended coordinate and output transformations, solving the problem, are formulated in terms of differential one-forms, associated with the input-output equation, corresponding to the state equations. An algorithm for transformation of state equations into the extended observer form is proposed and illustrated by an example. Moreover, the considered approach is compared with the method of dynamic observer error linearisation, which likewise is intended to enlarge the class of systems transformable into an observer form.

  5. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
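
    A minimal single-node block LMS sketch, without the diffusion/incremental cooperation among sensor nodes, is shown below to fix the idea of one weight update per block; all parameter values are arbitrary.

```python
import numpy as np

def block_lms(x, d, num_taps=8, block_size=4, mu=0.01):
    """Minimal block LMS: the filter weights are updated once per block, using the
    gradient accumulated over all samples in that block."""
    w = np.zeros(num_taps)
    x_pad = np.concatenate([np.zeros(num_taps - 1), x])
    errors = np.zeros(len(x))
    for start in range(0, len(x) - block_size + 1, block_size):
        grad = np.zeros(num_taps)
        for k in range(start, start + block_size):
            u = x_pad[k:k + num_taps][::-1]          # most recent sample first
            e = d[k] - w @ u
            errors[k] = e
            grad += e * u
        w += mu * grad                               # one update per block
    return w, errors

# Identify an unknown FIR channel from noisy observations.
rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = block_lms(x, d, num_taps=4, block_size=8, mu=0.01)
print(np.round(w, 3))                                # close to the true taps h
```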

  6. CAMS: OLAPing Multidimensional Data Streams Efficiently

    Science.gov (United States)

    Cuzzocrea, Alfredo

    In the context of data stream research, taming the multidimensionality of real-life data streams in order to efficiently support OLAP analysis/mining tasks is a critical challenge. Inspired by this fundamental motivation, in this paper we introduce CAMS (C ube-based A cquisition model for M ultidimensional S treams), a model for efficiently OLAPing multidimensional data streams. CAMS combines a set of data stream processing methodologies, namely (i) the OLAP dimension flattening process, which allows us to obtain dimensionality reduction of multidimensional data streams, and (ii) the OLAP stream aggregation scheme, which aggregates data stream readings according to an OLAP-hierarchy-based membership approach. We complete our analytical contribution by means of experimental assessment and analysis of both the efficiency and the scalability of OLAPing capabilities of CAMS on synthetic multidimensional data streams. Both analytical and experimental results clearly connote CAMS as an enabling component for next-generation Data Stream Management Systems.

  7. Passive and partially active fault tolerance for massively parallel stream processing engines

    DEFF Research Database (Denmark)

    Su, Li; Zhou, Yongluan

    2018-01-01

    ... On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPEs). ... We also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE, and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness...

  8. Extending statistical boosting. An overview of recent methodological developments.

    Science.gov (United States)

    Mayr, A; Binder, H; Gefeller, O; Schmid, M

    2014-01-01

    Boosting algorithms to simultaneously estimate and select predictor effects in statistical models have gained substantial interest during the last decade. This review highlights recent methodological developments regarding boosting algorithms for statistical modelling especially focusing on topics relevant for biomedical research. We suggest a unified framework for gradient boosting and likelihood-based boosting (statistical boosting) which have been addressed separately in the literature up to now. The methodological developments on statistical boosting during the last ten years can be grouped into three different lines of research: i) efforts to ensure variable selection leading to sparser models, ii) developments regarding different types of predictor effects and how to choose them, iii) approaches to extend the statistical boosting framework to new regression settings. Statistical boosting algorithms have been adapted to carry out unbiased variable selection and automated model choice during the fitting process and can nowadays be applied in almost any regression setting in combination with a large amount of different types of predictor effects.

  9. An Extended HITS Algorithm on Bipartite Network for Features Extraction of Online Customer Reviews

    Directory of Open Access Journals (Sweden)

    Chen Liu

    2018-05-01

    Full Text Available How to acquire useful information intelligently in the age of information explosion has become an important issue. In this context, sentiment analysis has emerged with the growing need for information extraction. One of the most important tasks of sentiment analysis is feature extraction of entities in consumer reviews. This paper first constructs a directed bipartite feature-sentiment relation network from a set of candidate feature-sentiment pairs extracted by dependency syntax analysis from consumer reviews. Then, a novel method called MHITS, which combines PMI with a weighted HITS algorithm, is proposed to rank these candidate product features and identify the real product features. Empirical experiments indicate the effectiveness of our approach across different product categories and data sizes. In addition, the effect of the proposed algorithm varies for corpora with different proportions of general collocation words such as “bad”, “good”, “poor”, “pretty good”, and “not bad”.
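
    A bare-bones version of HITS on a feature-sentiment bipartite graph is sketched below; the PMI edge weighting that distinguishes MHITS is deliberately left out, and the tiny adjacency matrix is invented for illustration.

```python
import numpy as np

def bipartite_hits(adj, n_iter=50):
    """Plain HITS on a bipartite graph: adj[i, j] = 1 when candidate feature i
    co-occurs with sentiment word j.  (MHITS additionally weights edges with PMI.)"""
    features = np.ones(adj.shape[0])
    sentiments = np.ones(adj.shape[1])
    for _ in range(n_iter):
        features = adj @ sentiments
        features /= np.linalg.norm(features)
        sentiments = adj.T @ features
        sentiments /= np.linalg.norm(sentiments)
    return features, sentiments

adj = np.array([[1, 1, 0],      # "battery"  - {good, poor}
                [1, 0, 1],      # "screen"   - {good, bright}
                [0, 0, 1]])     # "warranty" - {bright}  (likely a false pair)
f, s = bipartite_hits(adj)
print(np.argsort(-f))           # candidate features ranked by score
```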

  10. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    Directory of Open Access Journals (Sweden)

    C.-C. Lin

    2018-04-01

    Full Text Available Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrocks and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to be linearly varying with distance. The aquifer is divided into two subregions and the continuity conditions of the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived based on the steady-state solution as initial condition and the methods of finite Fourier transform and Laplace transform. Moreover, the solution for stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the real time domain are obtained by a numerical inversion scheme called the Stehfest algorithm. The software MODFLOW is chosen to compare with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
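
    The Stehfest inversion used for the head and SDR solutions is simple to reproduce. The sketch below is a generic implementation, checked against a textbook transform pair rather than the aquifer solution, and only gives the flavour of the scheme.

```python
from math import exp, factorial, log

def stehfest_inverse(F, t, N=12):
    """Stehfest numerical inversion of a Laplace-domain function F(s) at time t
    (N must be even)."""
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k**half * factorial(2 * k) /
                  (factorial(half - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        total += (-1) ** (half + i) * v * F(i * log(2.0) / t)
    return total * log(2.0) / t

# Sanity check against a known pair: the inverse of 1/(s + a) is exp(-a*t).
a = 0.7
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_inverse(lambda s: 1.0 / (s + a), t), exp(-a * t))
```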

  11. Fossil imprint of a powerful flare at the galactic center along the Magellanic stream

    Energy Technology Data Exchange (ETDEWEB)

    Bland-Hawthorn, J. [Sydney Institute for Astronomy, School of Physics A28, University of Sydney, NSW 2006 (Australia); Maloney, Philip R. [CASA, University of Colorado, Boulder, CO 80309-0389 (United States); Sutherland, Ralph S. [Mount Stromlo Observatory, Australia National University, Woden, ACT 2611 (Australia); Madsen, G. J. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)

    2013-11-20

    The Fermi satellite discovery of the gamma-ray emitting bubbles extending 50° (10 kpc) from the Galactic center has revitalized earlier claims that our Galaxy has undergone an explosive episode in the recent past. We now explore a new constraint on such activity. The Magellanic Stream is a clumpy gaseous structure free of stars trailing behind the Magellanic Clouds, passing over the south Galactic pole (SGP) at a distance of at least 50-100 kpc from the Galactic center. Several groups have detected faint Hα emission along the Magellanic Stream (1.1 ± 0.3 × 10⁻¹⁸ erg cm⁻² s⁻¹ arcsec⁻²) which is a factor of five too bright to have been produced by the Galactic stellar population. The brightest emission is confined to a cone with half angle θ_1/2 ≈ 25° roughly centered on the SGP. Time-dependent models of Stream clouds exposed to a flare in ionizing photon flux show that the ionized gas must recombine and cool for a time interval T_o = 0.6-2.9 Myr for the emitted Hα surface brightness to drop to the observed level. A nuclear starburst is ruled out by the low star formation rates across the inner Galaxy, and the non-existence of starburst ionization cones in external galaxies extending more than a few kiloparsecs. Sgr A* is a more likely candidate because it is two orders of magnitude more efficient at converting gas to UV radiation. The central black hole (M_• ≈ 4 × 10⁶ M_☉) can supply the required ionizing luminosity with a fraction of the Eddington accretion rate (f_E ∼ 0.03-0.3, depending on uncertain factors, e.g., Stream distance) typical of Seyfert galaxies. In support of nuclear activity, the Hα emission along the Stream has a polar angle dependence peaking close to the SGP. Moreover, it is now generally accepted that the Stream over the SGP must be farther than the Magellanic Clouds. At the lower halo gas densities, shocks become too ineffective and are unlikely to

  12. Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models

    Energy Technology Data Exchange (ETDEWEB)

    Chudnovsky, V

    2000-03-01

    I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system.

  13. Higher-spin cluster algorithms: the Heisenberg spin and U(1) quantum link models

    International Nuclear Information System (INIS)

    Chudnovsky, V.

    2000-01-01

    I discuss here how the highly-efficient spin-1/2 cluster algorithm for the Heisenberg antiferromagnet may be extended to higher-dimensional representations; some numerical results are provided. The same extensions can be used for the U(1) flux cluster algorithm, but have not yielded signals of the desired Coulomb phase of the system

  14. A generalized global alignment algorithm.

    Science.gov (United States)

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
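
    For readers unfamiliar with global alignment, a textbook Needleman-Wunsch scorer is sketched below; note that this is the standard model, not the generalized model with intermittent similarities implemented in GAP3, and the scoring values are arbitrary.

```python
def global_align(a, b, match=1, mismatch=-1, gap=-2):
    """Textbook Needleman-Wunsch global alignment score computed by dynamic
    programming over an (n+1) x (m+1) table."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,   # substitution or match
                          S[i - 1][j] + gap,       # gap in b
                          S[i][j - 1] + gap)       # gap in a
    return S[n][m]

print(global_align("GATTACA", "GCATGCU"))          # optimal global alignment score
```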

  15. Fragility issues of medical video streaming over 802.11e-WLAN m-health environments.

    Science.gov (United States)

    Tan, Yow-Yiong Edwin; Philip, Nada; Istepanian, Robert H

    2006-01-01

    This paper presents some of the fragility issues of medical video streaming over 802.11e WLAN in m-health applications. In particular, a medical channel-adaptive fair allocation (MCAFA) scheme for enhanced QoS support in IEEE 802.11 (WLAN) is proposed as a modification of the standard 802.11e enhanced distributed coordination function (EDCF), targeting improved medical data performance. The proposed MCAFA extends the EDCF by halving the contention window (CW) after zeta consecutive successful transmissions, to reduce the collision probability when the channel is busy. Simulation results show that MCAFA outperforms EDCF in terms of the overall performance relevant to the requirements of high-throughput medical data and video streaming traffic in 3G/WLAN wireless environments.

  16. Real-time detection and classification of anomalous events in streaming data

    Science.gov (United States)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.; Laska, Jason A.; Harrison, Lane T.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.

  17. Three-dimensional model of corotating streams in the solar wind 3. Magnetohydrodynamic streams

    International Nuclear Information System (INIS)

    Pizzo, V.J.

    1982-01-01

    The focus of this paper is two-fold: (1) to examine how the presence of the spiral magnetic field affects the evolution of interplanetary corotating solar wind streams, and (2) to ascertain the nature of secondary large-scale phenomena likely to be associated with streams having a pronounced three-dimensional (3-D) structure. The dynamics are presumed to be governed by the nonlinear polytropic, single-fluid, 3-D MHD equations. Solutions are obtained with an explicit, Eulerian, finite differences technique that makes use of a simple form of artificial diffusion for handling shocks. For smooth axisymmetric flows, the picture of magnetically induced meridional motions previously established by linear models requires only minor correction. In the case of broad 3-D streams input near the sun, inclusion of the magnetic field is found to retard the kinematic steepening at the stream front substantially but to produce little deviation from planar flow. For the more realistic case of initially sharply bounded streams, however, it becomes essential to account for magnetic effects in the formulation. Whether a full 3-D treatment is required depends upon the latitudinal geometry of the stream

  18. Gamma-ray streaming in straight pipes and bent ducts

    International Nuclear Information System (INIS)

    Eid, M.; Diop, C.M.; Nimal, J.C.

    1985-04-01

    An important shielding problem is gamma-ray streaming through voids. Such problems are encountered in reactors and reprocessing plants. A Monte Carlo method has been chosen as one of the most powerful techniques for solving this kind of problem. In this framework, a biasing scheme adapted to two types of geometries is proposed: straight pipes and bent ducts. A code has been written applying this technique. The numerical results obtained show the efficiency and the very good economy of the proposed method. It is hoped to extend the method to deal with more complex geometries and with polykinetic situations as well.

  19. Cytoplasmic Streaming in the Drosophila Oocyte.

    Science.gov (United States)

    Quinlan, Margot E

    2016-10-06

    Objects are commonly moved within the cell by either passive diffusion or active directed transport. A third possibility is advection, in which objects within the cytoplasm are moved with the flow of the cytoplasm. Bulk movement of the cytoplasm, or streaming, as required for advection, is more common in large cells than in small cells. For example, streaming is observed in elongated plant cells and the oocytes of several species. In the Drosophila oocyte, two stages of streaming are observed: relatively slow streaming during mid-oogenesis and streaming that is approximately ten times faster during late oogenesis. These flows are implicated in two processes: polarity establishment and mixing. In this review, I discuss the underlying mechanism of streaming, how slow and fast streaming are differentiated, and what we know about the physiological roles of the two types of streaming.

  20. Resonance self-shielding methodology of new neutron transport code STREAM

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung; Hong, Ser Gi

    2015-01-01

    This paper reports on the development and verification of three new resonance self-shielding methods. The verifications were performed using the new neutron transport code, STREAM. The new methodologies encompass the extension of energy range for resonance treatment, the development of optimum rational approximation, and the application of resonance treatment to isotopes in the cladding region. (1) The extended resonance energy range treatment has been developed to treat the resonances below 4 eV of three resonance isotopes and shows significant improvements in the accuracy of effective cross sections (XSs) in that energy range. (2) The optimum rational approximation can eliminate the geometric limitations of the conventional approach of equivalence theory and can also improve the accuracy of fuel escape probability. (3) The cladding resonance treatment method makes it possible to treat resonances in cladding material which have not been treated explicitly in the conventional methods. These three new methods have been implemented in the new lattice physics code STREAM and the improvement in the accuracy of effective XSs is demonstrated through detailed verification calculations. (author)