WorldWideScience

Sample records for sv detection algorithm

  1. One-mSv CT colonography: Effect of different iterative reconstruction algorithms on radiologists' performance.

    Science.gov (United States)

    Shin, Cheong-Il; Kim, Se Hyung; Im, Jong Pil; Kim, Sang Gyun; Yu, Mi Hye; Lee, Eun Sun; Han, Joon Koo

    2016-03-01

To analyze the effect of different reconstruction algorithms on image noise and radiologists' performance at ultra-low-dose CT colonography (CTC) in human subjects. This retrospective study had institutional review board approval, with waiver of the need to obtain informed consent. CTC and subsequent colonoscopy were performed on the same day in 28 patients. CTC was scanned in the supine/prone positions using 120/100 kVp and a fixed 10 mAs, and reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based IR (Veo) algorithms. Size-specific dose estimates (SSDE) and effective radiation doses were recorded. Image noise was compared among the three datasets using repeated-measures analysis of variance (ANOVA). Per-polyp sensitivity and figures-of-merit were compared among the datasets using the McNemar test and jackknife alternative free-response receiver operating characteristic (JAFROC) analysis, respectively, by one novice and one expert reviewer in CTC. Mean SSDE and effective radiation dose of CTC were 1.732 mGy and 1.002 mSv, respectively. Mean image noise in the supine/prone position datasets was significantly lowest with Veo (17.2/13.3), followed by ASIR (52.4/38.9) and FBP (69.9/50.8) (P < 0.05). Per-polyp sensitivity was highest with Veo reconstruction (81.0%, 64.3%), followed by ASIR (73.8%, 54.8%) and FBP (57.1%, 50.0%), with statistical significance between Veo and FBP for reader 1 (P = 0.002). JAFROC analysis revealed that the figure-of-merit for the detection of polyps was highest with Veo (0.917, 0.786), followed by ASIR (0.881, 0.750) and FBP (0.750, 0.746), with statistical significance between Veo or ASIR and FBP for reader 1 (P < 0.05). One-mSv CTC was not feasible using the standard FBP algorithm. However, diagnostic performance expressed as per-polyp sensitivity and figures-of-merit can be improved with the application of IR algorithms, particularly Veo. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. The quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time, whereas the price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage artificial neural network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
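
    The two ratios are not given in closed form in the abstract, so the sketch below is a hypothetical reading of them: the quote volatility ratio counts direction reversals of the best quote, and the price momentum ratio counts short monotone runs in prices. Both definitions and all parameter choices are assumptions for illustration only.

```python
# Hypothetical sketches of the two ratios described above; the paper's exact
# definitions are not in the abstract, so these are illustrative assumptions.

def quote_volatility_ratio(quotes):
    """Fraction of consecutive quote moves that reverse direction."""
    moves = [b - a for a, b in zip(quotes, quotes[1:]) if b != a]
    reversals = sum(1 for m1, m2 in zip(moves, moves[1:]) if m1 * m2 < 0)
    return reversals / max(len(moves) - 1, 1)

def price_momentum_ratio(prices, run=3):
    """Fraction of length-`run` move windows that are strictly monotone."""
    moves = [b - a for a, b in zip(prices, prices[1:])]
    windows = [moves[i:i + run] for i in range(len(moves) - run + 1)]
    monotone = sum(1 for w in windows
                   if all(m > 0 for m in w) or all(m < 0 for m in w))
    return monotone / max(len(windows), 1)
```

    Under these toy definitions, an oscillating quote series scores a volatility ratio near 1, while a steadily trending price series scores a momentum ratio near 1.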

  3. THE APPROACHING TRAIN DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    S. V. Bibikov

    2015-09-01

The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for a train detection algorithm that works under increased rail noise, when railway lines run close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is refined. We present the results of algorithm research and testing of the train-approach alarm device that implements the proposed algorithm. The algorithm is ready for upgrading the train-approach alarm device “Signalizator-P”.

  4. MUSIC algorithms for rebar detection

    Science.gov (United States)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects that are small in size compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of the more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is at a relatively high level. To overcome this drawback, a new technique is proposed here, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage, strong scatterers are detected. Then, information concerning their number and location is employed in a second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to drastically improve detection performance in realistic scenarios.
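
    The projection step at the heart of MUSIC can be sketched numerically. The snippet below is a minimal illustration on a simulated uniform linear array, which is an assumption for illustration (the paper applies MUSIC to subsurface rebar scattering, not to this toy geometry): the sample covariance is eigendecomposed, the noise subspace is extracted, and peaks of the pseudospectrum localize the two simulated scatterers.

```python
import numpy as np

# Toy MUSIC on an 8-element uniform linear array; geometry, noise level,
# and the assumption that K = 2 sources are present are illustrative only.
rng = np.random.default_rng(0)
M, N, d = 8, 200, 0.5          # sensors, snapshots, spacing in wavelengths
true_deg = [-20.0, 35.0]       # simulated scatterer directions

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.stack([steering(t) for t in true_deg], axis=1)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                         # sample covariance
_, eigvec = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = eigvec[:, : M - 2]                        # noise subspace (K = 2 assumed)

grid = np.arange(-90.0, 90.0, 0.1)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])
maxima = [i for i in range(1, len(grid) - 1)
          if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
maxima.sort(key=lambda i: pseudo[i], reverse=True)
est = sorted(grid[i] for i in maxima[:2])      # two strongest peaks
```

    The paper's two-stage variant would run this once to find the strong scatterers, then project them out before re-estimating, a refinement not reproduced in this sketch.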

  5. A fast meteor detection algorithm

    Science.gov (United States)

    Gural, P.

    2016-01-01

A low-latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the trade-offs made in the detection algorithm for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
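
    The MTP compression named above can be sketched compactly: a block of video frames is collapsed to the per-pixel maximum plus the frame index at which that maximum occurred, so a moving meteor leaves a bright streak whose per-pixel timestamps increase along the track. The exact product set of the Gural module is not specified in the abstract, so this reduction is an illustrative assumption.

```python
import numpy as np

# Sketch of Maximum Temporal Pixel (MTP) compression: (T, H, W) frames are
# reduced to a max image and the frame index of each pixel's maximum.
def mtp_compress(frames):
    stack = np.asarray(frames)
    return stack.max(axis=0), stack.argmax(axis=0)

# A bright dot moving one pixel per frame leaves increasing indices
# along its track in the index image.
frames = np.zeros((3, 5, 5))
for t in range(3):
    frames[t, 2, t + 1] = 255
max_img, idx_img = mtp_compress(frames)
```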

  6. Gear Tooth Wear Detection Algorithm

    Science.gov (United States)

    Delgado, Irebert R.

    2015-01-01

Vibration-based condition indicators continue to be developed for health and usage monitoring of rotorcraft gearboxes. Testing performed at NASA Glenn Research Center has shown correlations between specific condition indicators and specific types of gear wear. To speed up the detection and analysis of gear teeth, an image detection program based on the Viola-Jones algorithm was trained to automatically detect spiral bevel gear wear pitting. The detector was tested using a training set of gear wear pictures and a blind set of gear wear pictures. The detector accuracy for the training set was 75 percent, while the accuracy for the blind set was 15 percent. Further improvements in the accuracy of the detector are required, but preliminary results have shown its ability to automatically detect gear tooth wear. The trained detector would be used to quickly evaluate a set of gear or pinion pictures for pits, spalls, or abrasive wear. The results could then be used to correlate with vibration or oil debris data. In general, the program could be retrained to detect features of interest from pictures of a component taken over a period of time.

  7. Road Anomalies Detection Using Basic Morphological Algorithms

    Directory of Open Access Journals (Sweden)

    Dalia Danilescu

    2015-12-01

In this paper, some approaches to pothole detection on roads using morphological algorithms are reviewed and tested. For road anomaly detection, one of the key elements is the pavement pothole information. Any algorithm for pothole detection has certain advantages as well as limitations, owing to the real-world environment, which is highly unstructured and dynamic. For road segmentation, a road anomaly detection algorithm based on skeletonization is used.

  8. Road Anomalies Detection Using Basic Morphological Algorithms

    OpenAIRE

    Dalia Danilescu; Alexandru Lodin; Lăcrimioara Grama; Corneliu Rusu

    2015-01-01

In this paper, some approaches to pothole detection on roads using morphological algorithms are reviewed and tested. For road anomaly detection, one of the key elements is the pavement pothole information. Any algorithm for pothole detection has certain advantages as well as limitations, owing to the real-world environment, which is highly unstructured and dynamic. For road segmentation, a road anomaly detection algorithm based on skeletonization is used.

  9. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

Computer vision algorithms are powerful tools in astronomical image analysis, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  10. Performance Analysis of Cone Detection Algorithms

    CERN Document Server

    Mariotti, Letizia

    2015-01-01

Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of two popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use free-response receiver operating characteristic (FROC) curves to evaluate and compare the performance of the three algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimat...

  11. Formal verification of a deadlock detection algorithm

    Directory of Open Access Journals (Sweden)

    Freek Verbeek

    2011-10-01

Deadlock detection is a challenging issue in the analysis and design of on-chip networks. We have designed an algorithm to detect deadlocks automatically in on-chip networks with wormhole switching. The algorithm has been specified and proven correct in ACL2. To enable a top-down proof methodology, some parts of the algorithm have been left unimplemented. For these parts, the ACL2 specification contains constrained functions introduced with defun-sk. We used single-threaded objects to represent the data structures used by the algorithm. In this paper, we present details on the proof of correctness of the algorithm. The process of formal verification was crucial to getting the algorithm flawless. Our ultimate objective is to have an efficient, executable, and formally proven correct implementation of the algorithm running in ACL2.

  12. Smell Detection Agent Based Optimization Algorithm

    Science.gov (United States)

    Vinod Chandra, S. S.

    2016-09-01

In this paper, a novel nature-inspired optimization algorithm is presented in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest-path search over a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph, making the algorithm useful for NP-hard problems related to path discovery as well as for many practical optimization problems. An extended derivation of the algorithm enables it to solve shortest-path problems in general.

  13. Detection of Illegitimate Emails using Boosting Algorithm

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

In this paper, we report on experiments to detect illegitimate emails using a boosting algorithm. We call an email illegitimate if it is not useful for the receiver or for society. We have divided the problem into two major areas of illegitimate email detection: suspicious email detection...... and spam email detection. For our desired task, we have applied a boosting technique. With the use of boosting, we can achieve high accuracy with traditional classification algorithms. When using boosting, one has to choose a suitable weak learner as well as the number of boosting iterations. In this paper, we...... propose suitable weak learners and parameter settings for the boosting algorithm for the desired task. We initially analyzed the problem using base learners. Then we applied the boosting algorithm with suitable weak learners and parameter settings such as the number of boosting iterations. We...
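
    The boosting loop itself can be sketched independently of the email features. The abstract does not name the paper's weak learners, so the stumps, the toy one-dimensional data, and the round count below are assumptions purely to illustrate how boosting reweights the examples a weak learner gets wrong.

```python
import math

# AdaBoost sketch with threshold "stumps" as weak learners (illustrative
# only; the paper's actual weak learners and email features are not given
# in the abstract).
def train_adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    model = []                                  # (threshold, sign, alpha)
    for _ in range(rounds):
        best = None
        for thr in xs:                          # exhaustive stump search
            for sign in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if sign * (1 if x > thr else -1) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((thr, sign, alpha))
        # re-weight: boost the examples this stump got wrong
        w = [wi * math.exp(-alpha * y * sign * (1 if x > thr else -1))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(alpha * sign * (1 if x > thr else -1)
                for thr, sign, alpha in model)
    return 1 if score > 0 else -1

xs = [1, 2, 3, 8, 9, 10]
ys = [-1, -1, -1, 1, 1, 1]                      # toy "spam"/"ham" labels
model = train_adaboost(xs, ys)
```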

  14. SV-AUTOPILOT: optimized, automated construction of structural variation discovery and benchmarking pipelines.

    Science.gov (United States)

    Leung, Wai Yi; Marschall, Tobias; Paudel, Yogesh; Falquet, Laurent; Mei, Hailiang; Schönhuth, Alexander; Maoz Moss, Tiffanie Yael

    2015-03-25

    Many tools exist to predict structural variants (SVs), utilizing a variety of algorithms. However, they have largely been developed and tested on human germline or somatic (e.g. cancer) variation. It seems appropriate to exploit this wealth of technology available for humans also for other species. Objectives of this work included: a) Creating an automated, standardized pipeline for SV prediction. b) Identifying the best tool(s) for SV prediction through benchmarking. c) Providing a statistically sound method for merging SV calls. The SV-AUTOPILOT meta-tool platform is an automated pipeline for standardization of SV prediction and SV tool development in paired-end next-generation sequencing (NGS) analysis. SV-AUTOPILOT comes in the form of a virtual machine, which includes all datasets, tools and algorithms presented here. The virtual machine easily allows one to add, replace and update genomes, SV callers and post-processing routines and therefore provides an easy, out-of-the-box environment for complex SV discovery tasks. SV-AUTOPILOT was used to make a direct comparison between 7 popular SV tools on the Arabidopsis thaliana genome using the Landsberg (Ler) ecotype as a standardized dataset. Recall and precision measurements suggest that Pindel and Clever were the most adaptable to this dataset across all size ranges while Delly performed well for SVs larger than 250 nucleotides. A novel, statistically-sound merging process, which can control the false discovery rate, reduced the false positive rate on the Arabidopsis benchmark dataset used here by >60%. SV-AUTOPILOT provides a meta-tool platform for future SV tool development and the benchmarking of tools on other genomes using a standardized pipeline. It optimizes detection of SVs in non-human genomes using statistically robust merging. The benchmarking in this study has demonstrated the power of 7 different SV tools for analyzing different size classes and types of structural variants. The optional merge

  15. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

Background: The currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective......: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure-detection system. However, different patients might require different types of algorithms/approaches.

  16. Nearest Neighbour Corner Points Matching Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Changlong

    2015-01-01

Accurate corner detection plays an important part in camera calibration. To deal with the instability and inaccuracy of existing corner detection algorithms, a nearest-neighbour corner matching detection algorithm is put forward. First, it dilates the binary image of the photographed pictures, then searches for and retains the quadrilateral outlines in the image. Second, blocks that match chessboard corners are grouped into a class; if a class contains too many blocks, blocks are deleted, and if too few, blocks are added; the midpoint of the two vertex coordinates is then taken as the rough position of the corner. Finally, the algorithm precisely locates the position of the corners. Experimental results show that the algorithm has obvious advantages in the accuracy and validity of corner detection, and that it can ensure reliable camera calibration in traffic accident measurement.

  17. Lightning detection and exposure algorithms for smartphones

    Science.gov (United States)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

This study focuses on the key theory of lightning detection and exposure, and on the corresponding experiments. First, an algorithm based on the differential operation between two adjacent frames is selected to remove the background information and extract the lightning signal, and a threshold detection algorithm is applied to detect lightning precisely. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table can be built from the relationships between the exposure value and the average image brightness to achieve rapid automatic exposure. Finally, based on a USB 3.0 industrial camera with a CMOS imaging sensor, a hardware test platform is established and experiments are carried out on it to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, including special nighttime scenes, which will provide useful support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
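
    The first stage, differencing adjacent frames and thresholding the residual, can be sketched directly. The threshold value and the per-frame mean-energy statistic below are assumptions for illustration; the paper does not specify its exact statistic.

```python
import numpy as np

# Sketch of the two-step detection described above: difference adjacent
# frames to suppress the static background, then threshold the mean
# absolute change to flag a lightning event. Note that both the flash
# onset and its disappearance exceed the threshold.
def detect_flashes(frames, thresh=50.0):
    stack = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(stack, axis=0))     # |frame t+1 - frame t|
    energy = diffs.mean(axis=(1, 2))           # mean abs change per step
    return [t + 1 for t, e in enumerate(energy) if e > thresh]

# Five dim frames, with frame 3 saturated by a simulated flash.
frames = np.full((5, 4, 4), 10.0)
frames[3] = 200.0
flashes = detect_flashes(frames)
```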

  18. On Dijkstra's Algorithm for Deadlock Detection

    Science.gov (United States)

    Li, Youming; Greca, Ardian; Harris, James

We study a classical problem in operating systems concerning deadlock detection for systems with reusable resources. The elegant Dijkstra algorithm utilizes simple data structures, but it has the cost of quadratic dependence on the number of processes. Our goal is to reduce the cost in an optimal way without losing the simplicity of the data structures. More specifically, we present a graph-free and almost optimal algorithm whose cost depends linearly on the number of processes, when the number of resources is fixed and the units of requests for resources are bounded by constants.
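
    The classical reduction-style detection the paper starts from can be sketched as follows: repeatedly retire any process whose outstanding request can be met from the free units, releasing what it holds; whatever cannot be retired is deadlocked. This is the quadratic baseline; the paper's near-linear variant for a fixed number of resources is not reproduced here.

```python
# Classical deadlock detection for reusable resources (quadratic in the
# number of processes). available: free units per resource type;
# requests/held: per-process outstanding requests and current holdings.
def deadlocked(available, requests, held):
    free = list(available)
    done = [False] * len(requests)
    progress = True
    while progress:
        progress = False
        for i, req in enumerate(requests):
            if not done[i] and all(r <= f for r, f in zip(req, free)):
                for j, h in enumerate(held[i]):   # retire i, release its units
                    free[j] += h
                done[i] = True
                progress = True
    return [i for i, d in enumerate(done) if not d]
```

    Two processes each holding one unit of a single-unit-free resource and requesting one more are mutually deadlocked; give the system one free unit and the chain unwinds.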

  19. Test of TEDA, Tsunami Early Detection Algorithm

    Science.gov (United States)

    Bressan, Lidia; Tinti, Stefano

    2010-05-01

Tsunami detection in real time, both offshore and at the coastline, plays a key role in tsunami warning systems, since it provides, so far, the only reliable and timely proof of tsunami generation, and it is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in the case of tsunamis generated by submarine or coastal landslides, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of the tsunami signal in sea-level records, developed by the Tsunami Research Team of the University of Bologna. The development and testing of the algorithm have been accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level, and it is therefore based only on the sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the preceding background signal, which implies the assumption that the tsunami waves introduce a difference into the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by proper functions (IS and BS) that are updated at every new data acquisition. Detection is triggered when the instantaneous signal function passes a set threshold and is at the same time significantly larger than the previous background signal.
The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted to different situations: in general, coastal tide-gauges have a typical background spectrum depending on the location where the instrument is installed, due to local topography and bathymetry, while offshore buoys are
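
    The abstract specifies only that an instantaneous-signal function IS and a background-signal function BS are updated at each sample and compared against a threshold and against each other, so the concrete choices in the sketch below (short and long windows of sea-level gradients, and the two trigger parameters) are hypothetical stand-ins, not TEDA's actual functions.

```python
# Hypothetical sketch of TEDA-style trigger logic: IS summarizes the most
# recent sea-level gradients, BS the earlier background, and detection fires
# when IS exceeds an absolute threshold and dominates BS. All window sizes
# and thresholds here are illustrative assumptions.
def teda_trigger(levels, short=3, long=20, abs_thr=0.5, ratio=3.0):
    grads = [abs(b - a) for a, b in zip(levels, levels[1:])]
    for t in range(long + short, len(grads)):
        is_f = max(grads[t - short:t])                        # recent gradients
        bs_f = sum(grads[t - short - long:t - short]) / long  # background
        if is_f > abs_thr and is_f > ratio * max(bs_f, 1e-9):
            return t                                          # first detection
    return None

# A flat record followed by a sudden surge triggers at the surge.
levels = [0.0] * 30 + [1.0, 2.0, 3.0]
```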

  20. Detection of Cheating by Decimation Algorithm

    Science.gov (United States)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

We expand item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance with a small number of training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach to inferring the sparse interactions in Boltzmann machine learning with our greedy algorithm, and we find the latter to be superior in several aspects.

  1. Adaptive algorithm of magnetic heading detection

    Science.gov (United States)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian's motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of buildings is put forward to process the heading information. First, a 3D indoor localization platform is set up based on the MPU9250. Then, several groups of data are measured by changing the experimental environment and the pedestrian's pace. The raw data from the attached inertial measurement unit are calibrated, arranged into a time-stamped array, and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with an existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, and that it can detect the heading information accurately and in real time.

  2. Benchmark graphs for testing community detection algorithms

    Science.gov (United States)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e., the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection: modularity optimization and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
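
    The modularity score that "modularity optimization" methods maximize is Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/(2m)] δ(cᵢ, cⱼ). A direct (deliberately unoptimized, O(n²·|E|)) evaluation over an edge list makes the definition concrete:

```python
# Direct evaluation of Newman-Girvan modularity Q for a given partition.
# n: number of nodes; edges: undirected edge list; community: label per node.
def modularity(n, edges, community):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    q = 0.0
    for i in range(n):
        for j in range(n):
            if community[i] != community[j]:
                continue
            a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
            q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge, split at the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(6, edges, [0, 0, 0, 1, 1, 1])
```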

  3. Genome-Wide Detection of SNP and SV Variations to Reveal Early Ripening-Related Genes in Grape.

    Directory of Open Access Journals (Sweden)

    Yanshuai Xu

Early ripening in grape (Vitis vinifera L.) is a crucial agronomic trait. The fruits of the grape line 'Summer Black' (SBBM), which contains a bud mutation, can be harvested approximately one week earlier than the 'Summer Black' (SBC) control. To investigate the molecular mechanism of the bud mutation related to early ripening, we detected genome-wide genetic variations based on re-sequencing. In total, 3,692,777 single nucleotide polymorphisms (SNPs) and 81,223 structural variations (SVs) in the SBC genome and 3,823,464 SNPs and 85,801 SVs in the SBBM genome were detected compared with the reference grape sequence. Of these, 635 SBC-specific genes and 665 SBBM-specific genes were screened. Ripening- and colour-associated unigenes with non-synonymous mutations (NS), SVs or frame-shift mutations (F) were analysed. The results showed that 90 unigenes in SBC, 76 unigenes in SBBM and 13 genes that mapped to large fragment indels were filtered. The expression patterns of eight genes were confirmed using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). The re-sequencing data showed that 635 SBC-specific genes and 665 SBBM-specific genes associated with early ripening were screened. Among these, NCED6 expression appears to be related to NCED1 and is involved in ABA biosynthesis in grape, which might play a role in the onset of anthocyanin accumulation. The SEP and ERF genes probably play roles in the ethylene response.

  4. Algorithms for Anomaly Detection - Lecture 2

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The concept of statistical anomalies, or outliers, has fascinated experimentalists since the earliest attempts to interpret data. We want to know why some data points don’t seem to belong with the others: perhaps we want to eliminate spurious or unrepresentative data from our model. Or, the anomalies themselves may be what we are interested in: an outlier could represent the symptom of a disease, an attack on a computer network, a scientific discovery, or even an unfaithful partner. We start with some general considerations, such as the relationship between clustering and anomaly detection, the choice between supervised and unsupervised methods, and the difference between global and local anomalies. Then we will survey the most representative anomaly detection algorithms, highlighting what kind of data each approach is best suited to, and discussing their limitations. We will finish with a discussion of the difficulties of anomaly detection in high-dimensional data and some new directions for anomaly detec...

  5. Algorithms for Anomaly Detection - Lecture 1

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The concept of statistical anomalies, or outliers, has fascinated experimentalists since the earliest attempts to interpret data. We want to know why some data points don’t seem to belong with the others: perhaps we want to eliminate spurious or unrepresentative data from our model. Or, the anomalies themselves may be what we are interested in: an outlier could represent the symptom of a disease, an attack on a computer network, a scientific discovery, or even an unfaithful partner. We start with some general considerations, such as the relationship between clustering and anomaly detection, the choice between supervised and unsupervised methods, and the difference between global and local anomalies. Then we will survey the most representative anomaly detection algorithms, highlighting what kind of data each approach is best suited to, and discussing their limitations. We will finish with a discussion of the difficulties of anomaly detection in high-dimensional data and some new directions for anomaly detec...

  6. Development of a Model Fire Detection Algorithm

    Directory of Open Access Journals (Sweden)

    O. S. Ismail

    2017-06-01

An adaptive model algorithm for detection of fire (flickering flame) in the infrared region is presented. The model makes use of a pyroelectric infrared (PIR) sensor/passive infrared detector (PID) for infrared fire detection. Sample analog signals (around the flame-flicker region) were generated and simulated within the framework of the modelled PIR sensor/PID. A joint time-frequency analysis (JTFA) function, specifically the discrete wavelet transform (DWT), was applied to model the digital signal processing (DSP) of the generated signals. It was implemented as wavelet analyzer filters for "fire" and "non-fire" feature extraction. A piecewise modified artificial neural network (PMANN) and the intraclass correlation coefficient (ICC) were employed in the decision rule. The PMANN generated polynomials which analyzed and 'memorized' the signals from the DSP. The ICC further categorized cases as 'fire' or 'non-fire' by comparing the data analyzed by the PMANN. The results show that the model algorithm can be implemented in modern-day fire detectors and be used to detect industrial hydrocarbon fires with fewer false alarms than smoke detectors or ultraviolet detectors.
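
    The DWT stage can be illustrated with the simplest wavelet: a one-level Haar transform splits a signal into low-pass approximation and high-pass detail coefficients, so a rapidly flickering component concentrates its energy in the detail band while slow background drift stays in the approximation band. The paper does not name its wavelet, so Haar here is an assumption for illustration (even-length input assumed).

```python
import math

# One-level Haar DWT: pairwise sums give the approximation (low-pass) band,
# pairwise differences the detail (high-pass) band.
def haar_dwt(signal):
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# A constant ("non-fire") signal has zero detail energy; a sample-rate
# "flicker" puts all its energy in the detail band.
const_a, const_d = haar_dwt([1.0, 1.0, 1.0, 1.0])
flick_a, flick_d = haar_dwt([1.0, -1.0, 1.0, -1.0])
```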

  7. Botnet Propagation Via Public Websites Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Jonas Juknius

    2011-08-01

    Full Text Available The networks of compromised and remotely controlled computers (bots) are widely used in many Internet fraudulent activities, especially in distributed denial-of-service attacks. Brute force gives enormous power to bot masters and makes botnet traffic visible; therefore, some countermeasures might be applied at early stages. Our study focuses on detecting botnet propagation via public websites. The provided algorithm might help prevent massive infections when popular websites are compromised, without spreading the visual changes used for malware in botnets.

  8. Photon Counting Using Edge-Detection Algorithm

    Science.gov (United States)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    New applications such as high-datarate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not to overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, the ability to implement such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented to both characterize gigahertz bandwidth single-photon detectors, as well as process photon count signals at rates into gigaphotons per second in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows for the ability to take inputs from a quadphoton counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. 
Each analog input is fed to a high-speed 1
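The leading-edge detection described above can be sketched in a few lines (a software model of what the record describes as hardware; the trace and threshold are illustrative): a photon is counted only on the rising crossing, so a pulse that spans multiple sample times is not overcounted.

```python
def leading_edges(samples, threshold):
    """Return indices of rising threshold crossings (one count per pulse)."""
    edges = []
    above = False
    for i, s in enumerate(samples):
        if s > threshold and not above:
            edges.append(i)  # rising crossing: count one photon
        above = s > threshold
    return edges

# Two pulses, each spanning several sample times:
trace = [0, 0, 3, 5, 4, 1, 0, 0, 4, 6, 2, 0]
print(leading_edges(trace, 2))  # -> [2, 8]
```

In the hardware described above, four such edge-detected streams are then ORed together and deserialized before reaching the signal processor.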

  9. Climate Data Homogenization Using Edge Detection Algorithms

    Science.gov (United States)

    Hammann, A. C.; Rennermalm, A. K.

    2015-12-01

    The problem of climate data homogenization has predominantly been addressed by testing the likelihood of one or more breaks inserted into a given time series and modeling the mean to be stationary in between the breaks. We recast the same problem in a slightly different form: that of detecting step-like changes in noisy data, and observe that this problem has spawned a large number of approaches to its solution as the "edge detection" problem in image processing. With respect to climate data, we ask the question: How can we optimally separate step-like from smoothly-varying low-frequency signals? We study the hypothesis that the edge-detection approach makes better use of all information contained in the time series than the "traditional" approach (e.g. Caussinus and Mestre, 2004), which we base on several observations. 1) The traditional formulation of the problem reduces the available information from the outset to that contained in the test statistic. 2) The criterion of local steepness of the low-frequency variability, while at least hypothetically useful, is ignored. 3) The practice of using monthly data corresponds, mathematically, to applying a moving average filter (to reduce noise) and subsequent subsampling of the result; this subsampling reduces the amount of available information beyond what is necessary for noise reduction. Most importantly, the tradeoff between noise reduction (better with filters with wide support in the time domain) and localization of detected changes (better with filters with narrow support) is expressed in the well-known uncertainty principle and can be addressed optimally within a time-frequency framework. Unsurprisingly, a large number of edge-detection algorithms have been proposed that make use of wavelet decompositions and similar techniques. We are developing this framework in part to be applied to a particular set of climate data from Greenland; we will present results from this application as well as from tests with
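A crude version of the step-detection idea above can be sketched with a moving difference-of-means filter (an illustrative substitute; the abstract's own proposal is wavelet-based): the filter response peaks at a step-like inhomogeneity in the series.

```python
def step_response(x, half=10):
    """Difference of right-window and left-window means at each position."""
    out = [0.0] * len(x)
    for i in range(half, len(x) - half):
        left = sum(x[i - half:i]) / half
        right = sum(x[i:i + half]) / half
        out[i] = right - left
    return out

series = [0.0] * 50 + [1.0] * 50      # a clean break at index 50
resp = step_response(series)
print(resp.index(max(resp)))  # -> 50
```

The tradeoff the abstract describes is visible here: a wider `half` averages away more noise but localizes the detected break less sharply.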

  10. Detecting Hijacked Journals by Using Classification Algorithms.

    Science.gov (United States)

    Andoohgin Shahri, Mona; Jazi, Mohammad Davarpanah; Borchardt, Glenn; Dadkhah, Mehdi

    2017-04-10

    Invalid journals are recent challenges in the academic world and many researchers are unacquainted with the phenomenon. The number of victims appears to be accelerating. Researchers might be suspicious of predatory journals because they have unfamiliar names, but hijacked journals are imitations of well-known, reputable journals whose websites have been hijacked. Hijacked journals issue calls for papers via generally laudatory emails that delude researchers into paying exorbitant page charges for publication in a nonexistent journal. This paper presents a method for detecting hijacked journals by using a classification algorithm. The number of published articles exposing hijacked journals is limited and most of them use simple techniques that are limited to specific journals. Hence we needed to amass Internet addresses and pertinent data for analyzing this type of attack. We inspected the websites of 104 scientific journals by using a classification algorithm that used criteria common to reputable journals. We then prepared a decision tree that we used to test five journals we knew were authentic and five we knew were hijacked.

  11. An efficient and fast detection algorithm for multimode FBG sensing

    DEFF Research Database (Denmark)

    Ganziy, Denis; Jespersen, O.; Rose, B.

    2015-01-01

    We propose a novel dynamic gate algorithm (DGA) for fast and accurate peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of signal-to-noise ratio ...
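The gate-plus-center-of-gravity idea can be sketched as follows (a minimal version; the relative threshold, the sample spectrum, and the omission of the paper's bias compensation are all assumptions for illustration): open a window where the spectrum exceeds a fraction of its peak, then estimate the peak wavelength as the windowed center of gravity.

```python
def center_of_gravity_peak(wavelengths, power, rel_threshold=0.5):
    """Estimate peak position via center of gravity inside a threshold gate."""
    peak = max(power)
    gate = [(w, p) for w, p in zip(wavelengths, power) if p >= rel_threshold * peak]
    total = sum(p for _, p in gate)
    return sum(w * p for w, p in gate) / total

# Symmetric FBG-like peak sampled on a wavelength grid (nm):
grid = [1550.0 + 0.01 * i for i in range(-10, 11)]
spectrum = [max(0.0, 1.0 - (w - 1550.0) ** 2 / 0.01) for w in grid]
print(round(center_of_gravity_peak(grid, spectrum), 4))  # symmetric peak -> 1550.0
```

For an asymmetric or noisy peak the raw center of gravity is biased, which is why the DGA adds bias compensation on top of this estimate.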

  12. Anomaly Detection using the "Isolation Forest" algorithm

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Anomaly detection can provide clues about an outlying minority class in your data: hackers in a set of network events, fraudsters in a set of credit card transactions, or exotic particles in a set of high-energy collisions. In this talk, we analyze a real dataset of breast tissue biopsies, with malignant results forming the minority class. The "Isolation Forest" algorithm finds anomalies by deliberately “overfitting” models that memorize each data point. Since outliers have more empty space around them, they take fewer steps to memorize. Intuitively, a house in the country can be identified simply as “that house out by the farm”, while a house in the city needs a longer description like “that house in Brooklyn, near Prospect Park, on Union Street, between the firehouse and the library, not far from the French restaurant”. We first use anomaly detection to find outliers in the biopsy data, then apply traditional predictive modeling to discover rules that separate anomalies from normal data...
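The "fewer steps to isolate an outlier" intuition can be sketched with a toy one-dimensional isolation procedure (stdlib only; the data, tree count, and seed are illustrative assumptions, not the actual Isolation Forest implementation): repeatedly split at random values and count how many splits it takes until a point stands alone.

```python
import random

def isolation_depth(x, data, rng, depth=0, max_depth=30):
    """Number of random splits needed to isolate x within data."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    side = [v for v in data if (v < split) == (x < split)]  # keep x's side
    return isolation_depth(x, side, rng, depth + 1, max_depth)

def average_depth(x, data, trees=200, seed=1):
    rng = random.Random(seed)
    return sum(isolation_depth(x, data, rng) for _ in range(trees)) / trees

gen = random.Random(0)
cluster = [gen.gauss(0.0, 1.0) for _ in range(100)]
data = cluster + [10.0]  # one obvious outlier
print(average_depth(10.0, data) < average_depth(cluster[0], data))  # -> True
```

The outlier, with empty space around it, is separated after far fewer random splits than a point inside the dense cluster, which is exactly the scoring signal Isolation Forest aggregates over many trees.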

  13. SA-SOM algorithm for detecting communities in complex networks

    Science.gov (United States)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Currently, community detection is a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), whereby the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks from the LFR benchmark are utilized to verify the accuracy and efficiency of this algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately and efficiently. Furthermore, it also acquires higher values of modularity, NMI and density than the SOM algorithm does.

  14. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p…), and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
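Extracting a Pareto front from candidate detector settings can be sketched as follows (the candidate points are hypothetical, not the study's data): a setting is Pareto optimal if no other setting achieves both higher sensitivity and a lower false-positive rate.

```python
def pareto_front(points):
    """points: (sensitivity, fp_rate) pairs; higher sensitivity, lower FP is better."""
    front = []
    for s, fp in points:
        dominated = any(s2 >= s and fp2 <= fp and (s2, fp2) != (s, fp)
                        for s2, fp2 in points)
        if not dominated:
            front.append((s, fp))
    return sorted(front)

# Hypothetical (sensitivity, false positives per scan) operating points:
candidates = [(0.80, 2.0), (0.85, 3.0), (0.85, 2.5), (0.90, 5.0), (0.70, 4.0)]
print(pareto_front(candidates))  # -> [(0.8, 2.0), (0.85, 2.5), (0.9, 5.0)]
```

Comparing algorithms by their whole fronts, rather than by single operating points, is what lets the evolutionary experiments above rank the one-step and two-step designs fairly.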

  15. An algorithm Walktrap-SPM for detecting overlapping community structure

    Science.gov (United States)

    Hu, Fang; Zhu, Youze; Shi, Yuan; Cai, Jianchao; Chen, Luogeng; Shen, Shaowu

    2017-06-01

    In this paper, a novel algorithm, Walktrap-SPM, for detecting overlapping communities is proposed. It builds on the Walktrap algorithm and the idea of random walks: by selecting neighbor communities, introducing an improved signed probabilistic mixture (SPM) model, and treating edges within a community as positive links and edges between communities as negative links, the algorithm not only identifies overlapping communities but also greatly increases the objectivity and accuracy of the results. To verify its accuracy, the performance of the algorithm is tested on several representative real-world networks and a set of computer-generated networks based on the LFR benchmark. The experimental results indicate that the algorithm identifies communities accurately and is well suited for overlapping community detection. Compared with the Walktrap, SPM and LMF algorithms, the presented algorithm acquires higher values of modularity and NMI. Moreover, it runs faster than the SPM and LMF algorithms.

  16. A new real-time tsunami detection algorithm

    Science.gov (United States)

    Chierici, Francesco; Embriaco, Davide; Pignagnoli, Luca

    2017-01-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on real-time tide removal and real-time band-pass filtering of seabed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be usable also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm's performance is estimated by defining and evaluating statistical parameters: the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions were used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event that occurred at Haida Gwaii on 28 October 2012, using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm also ran successfully, for test purposes, in year-long missions onboard abyssal observatories deployed in the Gulf of Cadiz and in the Western Ionian Sea.
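The two-stage idea above, detide then look for short-period residuals, can be sketched with a moving-average detrend (an illustrative stand-in for the authors' real-time filters; the window, threshold, and synthetic signals are all assumptions):

```python
import math

def detect_tsunami(pressure, window=201, threshold=0.1):
    """Flag samples whose residual after moving-average de-tiding is large."""
    half = window // 2
    alarms = []
    for i in range(half, len(pressure) - half):
        tide_est = sum(pressure[i - half:i + half + 1]) / window  # slow component
        if abs(pressure[i] - tide_est) > threshold:
            alarms.append(i)
    return alarms

n = 1000
tide = [0.5 * math.sin(2 * math.pi * i / n) for i in range(n)]        # slow tide
pulse = [0.2 * math.sin(2 * math.pi * (i - 600) / 40) if 600 <= i < 640 else 0.0
         for i in range(n)]                                            # short wave
alarms = detect_tsunami([t + p for t, p in zip(tide, pulse)])
print(len(alarms) > 0 and all(600 <= a < 640 for a in alarms))  # -> True
```

The slow tide is absorbed by the moving average while the short tsunami-like pulse survives in the residual, which is the separation the band-pass design above achieves more rigorously.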

  17. Particle detection algorithms for complex plasmas

    Science.gov (United States)

    Mohr, D. P.; Knapek, C. A.; Huber, P.; Zaehringer, E.

    2018-01-01

    The micrometer-sized particles in a complex plasma can be directly visualized and recorded by digital video cameras. To analyze the dynamics of single particles, reliable algorithms are required to determine their positions from the recorded images to sub-pixel accuracy. Here, we study several detection algorithms in combination with common image pre- and post-processing techniques, and the impact of the choice of threshold parameters.

  18. A Vehicle Detection Algorithm Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information when determining the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets.

  19. A community detection algorithm based on structural similarity

    Science.gov (United States)

    Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu

    2017-09-01

    In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, which improves computational efficiency. Tests were done on Zachary's karate club network, the dolphin social network and the football dataset, and the results were compared with the GN and SSNCA algorithms. The results show that the new algorithm is more accurate on dense networks and that its operating efficiency is clearly improved.
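The structural-similarity edge weight can be sketched with the common cosine-style definition over shared neighborhoods (the toy graph and the exact formula variant are assumptions for illustration; the paper may define the weight differently): edges between nodes with few shared neighbors get low weights and would fall below the threshold k.

```python
import math

def structural_similarity(graph, u, v):
    """Cosine-style overlap of the closed neighborhoods of u and v."""
    nu = graph[u] | {u}
    nv = graph[v] | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

# A triangle {a, b, c} loosely attached to a tail d-e:
graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
intra = structural_similarity(graph, "a", "b")   # edge inside the triangle
bridge = structural_similarity(graph, "c", "d")  # edge bridging to the tail
print(intra > bridge)  # -> True
```

Pruning low-similarity bridge edges before clustering is what lets the dense communities separate cheaply.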

  20. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    Full Text Available In order to discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node. Since the agglomeration ability of a single node is less than that of multiple nodes, community extension in this algorithm no longer starts from the initial node alone but from a node cluster that contains the initial node and whose nodes are relatively densely connected with each other. The algorithm has two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.

  1. GARD: a genetic algorithm for recombination detection

    National Research Council Canada - National Science Library

    Kosakovsky Pond, Sergei L; Posada, David; Gravenor, Michael B; Woelk, Christopher H; Frost, Simon D W

    2006-01-01

    .... We developed a likelihood-based model selection procedure that uses a genetic algorithm to search multiple sequence alignments for evidence of recombination breakpoints and identify putative recombinant sequences...

  2. ANOMALY DETECTION IN NETWORKING USING HYBRID ARTIFICIAL IMMUNE ALGORITHM

    Directory of Open Access Journals (Sweden)

    D. Amutha Guka

    2012-01-01

    Full Text Available In today's network scenario, where computers are interconnected through the Internet, the security of an information system is a very important issue. Because no system can be absolutely secure, the timely and accurate detection of anomalies is necessary. The main aim of this paper is to improve anomaly detection by using a Hybrid Artificial Immune Algorithm (HAIA), which is based on Artificial Immune Systems (AIS) and the Genetic Algorithm (GA). In this work, the HAIA approach is used to develop a Network Anomaly Detection System (NADS). The detector set is generated using the GA, and anomalies are identified using the Negative Selection Algorithm (NSA), which is based on AIS. The HAIA algorithm is tested with the KDD Cup 99 benchmark dataset, and the detection rate is used to measure the effectiveness of the NADS. The results and consistency of the HAIA are compared with earlier approaches, and the proposed algorithm gives the best results.
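The negative-selection step can be sketched with a toy binary version (the bit strings, the r-contiguous-bits matching rule, and the random generation are illustrative assumptions; the paper generates detectors with a GA): detectors are random patterns that match no "self" sample, so anything a detector matches is, by construction, non-self.

```python
import random

def matches(detector, sample, r=4):
    """r-contiguous-bits matching rule."""
    return any(detector[i:i + r] == sample[i:i + r]
               for i in range(len(detector) - r + 1))

def generate_detectors(self_set, n, length=8, seed=7):
    """Keep random candidates that match no self sample (negative selection)."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(cand, s) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = ["00000000", "00000001", "10000000"]   # normal traffic patterns
detectors = generate_detectors(self_set, 20)
is_anomaly = lambda x: any(matches(d, x) for d in detectors)
flagged = is_anomaly("11111111")  # far from self: likely, not guaranteed, flagged
print(is_anomaly("00000000"))  # self is never flagged -> False
```

The guarantee is one-sided: self samples can never be flagged, while coverage of the non-self space grows with the detector count, which is what the GA in the paper optimizes.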

  3. Theoretical foundations of NRL spectral target detection algorithms.

    Science.gov (United States)

    Schaum, Alan

    2015-11-01

    The principal spectral detection algorithms developed at the Naval Research Laboratory (NRL) over the past 20 years for use in operational systems are described. These include anomaly detectors, signature-based methods, and techniques for anomalous change detection. Newer derivations are provided that have motivated more recent work. Mathematical methods facilitating the use of forward models for the prediction of spectral signature statistics are described and a detection algorithm is derived for ocean surveillance that is based on principles of clairvoyant fusion.

  4. SV-AUTOPILOT

    NARCIS (Netherlands)

    Leung, Wai Yi; Marschall, Tobias; Paudel, Yogesh; Falquet, Laurent; Mei, Hailiang; Schönhuth, Alexander; Maoz Moss, Tiffanie Yael

    2015-01-01

    Background: Many tools exist to predict structural variants (SVs), utilizing a variety of algorithms. However, they have largely been developed and tested on human germline or somatic (e.g. cancer) variation. It seems appropriate to exploit this wealth of technology available for humans also for

  5. Virus-specific nucleic acids in SV40-exposed hamster embryo cell lines: correlation with S and T antigens.

    Science.gov (United States)

    Levin, M J; Oxman, M N; Diamandopoulos, G T; Levine, A S; Henry, P H; Enders, J F

    1969-02-01

    A number of homologous SV40-exposed hamster embryonic cell lines were examined for the presence of RNA complementary to SV40 DNA. Only those lines containing the SV40 T antigen were found to have such virus-specific RNA. In lines containing the SV40 S antigen, but not the SV40 T antigen, virus-specific RNA was not detected. These findings suggest that the S antigen is not coded for directly by the SV40 genome.

  6. VIRUS-SPECIFIC NUCLEIC ACIDS IN SV40-EXPOSED HAMSTER EMBRYO CELL LINES: CORRELATION WITH S AND T ANTIGENS*

    Science.gov (United States)

    Levin, Myron J.; Oxman, Michael N.; Diamandopoulos, George Th.; Levine, Arthur S.; Henry, Patrick H.; Enders, John F.

    1969-01-01

    A number of homologous SV40-exposed hamster embryonic cell lines were examined for the presence of RNA complementary to SV40 DNA. Only those lines containing the SV40 T antigen were found to have such virus-specific RNA. In lines containing the SV40 S antigen, but not the SV40 T antigen, virus-specific RNA was not detected. These findings suggest that the S antigen is not coded for directly by the SV40 genome. PMID:4307716

  7. Algorithms and architectures for low power spike detection and alignment

    Science.gov (United States)

    Zviagintsev, Alex; Perelman, Yevgeny; Ginosar, Ran

    2006-03-01

    We introduce algorithms and architectures for automatic spike detection and alignment that are designed for low power. Some of the algorithms are based on principal component analysis (PCA). Others employ a novel integral transform analysis and achieve 99% of the precision of a PCA detector, while requiring only 0.05% of the computational complexity. The algorithms execute autonomously, but require off-line training and setting of computational parameters. We employ pre-recorded neuronal signals to evaluate the accuracy of the proposed algorithms and architectures: the recorded data are processed by a standard PCA spike detection and alignment software algorithm, as well as by the several hardware algorithms, and the outcomes are compared.
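A much simpler stand-in for the detectors above, a threshold crossing followed by alignment on the local peak, can be sketched as follows (illustrative trace and window; the paper's PCA and integral-transform detectors are far more elaborate):

```python
def detect_spikes(signal, threshold, win=5):
    """Detect threshold crossings and align each spike to its local maximum."""
    spikes = []
    i = 0
    while i < len(signal):
        if signal[i] > threshold:
            seg = signal[i:i + win]
            spikes.append(i + seg.index(max(seg)))  # align to local peak
            i += win                                # skip the rest of this spike
        else:
            i += 1
    return spikes

trace = [0.0] * 30
trace[10], trace[11], trace[12] = 0.6, 1.0, 0.4   # one spike, peak at 11
trace[22], trace[23] = 0.5, 0.9                   # second spike, peak at 23
print(detect_spikes(trace, 0.3))  # -> [11, 23]
```

Alignment matters because downstream spike sorting compares waveform shapes, which only works if every spike is anchored at the same feature.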

  8. Analysis of Community Detection Algorithms for Large Scale Cyber Networks

    Energy Technology Data Exchange (ETDEWEB)

    Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee; Mackey, Patrick S.; Springer, John

    2016-09-30

    The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concept of clustering and community detection followed by the research question that the team aimed to address. Further the paper describes the graph metrics that were considered in order to shortlist algorithms followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section in the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.

  9. An improved edge detection algorithm for depth map inpainting

    Science.gov (United States)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of Kinect sensor makes 3D measurement much easier. However the depth map captured by Kinect sensor has some invalid regions, especially at object boundaries. These missing regions should be filled firstly. This paper proposes a depth-assisted edge detection algorithm and improves existing depth map inpainting algorithm using extracted edges. In the proposed algorithm, both color image and raw depth data are used to extract initial edges. Then the edges are optimized and are utilized to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries and inhibit non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm can predict missing depth values successfully and has better performance than existing algorithm around object boundaries.

  10. Analysing Threshold Value in Fire Detection Algorithm Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Bowo E. Cahyono

    2012-08-01

    Full Text Available MODIS instruments have been designed to include special channels for fire monitoring, with additional spectral thermal-band detectors. A basic understanding of remote sensing fire detection should be kept in mind in order to improve the algorithm for regional-scale detection purposes; much room for exploration remains. This paper describes the principle of fire investigation applied to MODIS data. The main algorithm used in this research is the contextual algorithm developed by a NASA science team. Varying the applied T4 threshold in the range of 320–360 K shows that the number of detected fires changes significantly, while significant differences in detected FHS from changing the ∆T threshold occur in the range of 15–35 K. Improvement and adjustment of the fire detection algorithm is needed to obtain the best accuracy for local or regional conditions. The MOD14 algorithm applies threshold values of 325 K for T4 and 20 K for ∆T. Validation was done on the algorithm results for MODIS datasets over Indonesia and South Africa: the accuracy of MODIS fire detection by the MOD14 algorithm is 73.2% over Sumatra–Borneo and 91.7% over South Africa.
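The core threshold test can be sketched directly from the MOD14-style values quoted above (T4 > 325 K and ∆T > 20 K; taking ∆T as the difference between the 4 µm and 11 µm brightness temperatures is an assumption here, and the full contextual algorithm also compares each candidate pixel against its background neighborhood):

```python
def is_fire_pixel(t4_kelvin, t11_kelvin, t4_thresh=325.0, dt_thresh=20.0):
    """Absolute-threshold fire test on 4 um and 11 um brightness temperatures."""
    return t4_kelvin > t4_thresh and (t4_kelvin - t11_kelvin) > dt_thresh

print(is_fire_pixel(340.0, 300.0))  # hot and strongly elevated over 11 um -> True
print(is_fire_pixel(330.0, 320.0))  # warm but low spectral contrast -> False
```

Tightening `t4_thresh` toward 360 K suppresses false alarms from hot bare surfaces at the cost of missing smaller or cooler fires, which is exactly the tradeoff the threshold sweep above quantifies.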

  11. Extended Approximate String Matching Algorithms To Detect Name Aliases

    DEFF Research Database (Denmark)

    Shaikh, Muniba; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    This paper focuses on the problem of alias detection based on orthographic variations of Arabic names. Alias detection is the process of identifying different variants of the same name. To detect aliases based on orthographic variations, approximate string matching (ASM) algorithms, which measure the similarities between two strings (i.e., the name and alias), are widely used. ASM algorithms work well to detect various types of orthographic variations, but there is still a need to develop techniques to detect correct aliases of Arabic names that occur due to the translation of Arabic names into English.
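A generic ASM baseline can be sketched with edit distance (a minimal illustration, not the paper's extended algorithms; the transliteration examples and the 0.6 cutoff are assumptions): two transliterations of the same name score as close matches, while unrelated names do not.

```python
def edit_distance(a, b):
    """Levenshtein distance via a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

print(similarity("mohammed", "muhammad") > 0.6)  # likely aliases -> True
print(similarity("mohammed", "ibrahim") > 0.6)   # different names -> False
```

The extensions the paper proposes address exactly the cases this baseline misses: systematic vowel substitutions introduced by Arabic-to-English transliteration can push genuine aliases below a plain edit-distance cutoff.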

  12. Improvement and implementation for Canny edge detection algorithm

    Science.gov (United States)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was simulated with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
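The adaptive dual-threshold step can be sketched with Otsu's method on a gray-level histogram (a stdlib sketch; deriving the low threshold as half the Otsu value is a common heuristic assumed here, not taken from the paper): pick the threshold that maximizes between-class variance.

```python
def otsu_threshold(pixels, levels=256):
    """Gray level maximizing between-class variance of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                            # mean of the dark class
        m1 = (total_sum - sum0) / (total - w0)    # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

pixels = [10] * 500 + [200] * 500   # two clean gray-level classes
high = otsu_threshold(pixels)
low = high // 2                      # hysteresis low threshold (heuristic)
print(10 <= high < 200)  # threshold separates the two classes -> True
```

In the Canny pipeline, gradient magnitudes above `high` seed edges and magnitudes between `low` and `high` extend them, so deriving both from the data avoids hand tuning per image.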

  13. An optimized rail crack detection algorithm based on population status

    Science.gov (United States)

    Zhao, Jiao; Gao, Ruipeng; Yang, Yuxiang; Wang, Bing

    To improve the efficiency and accuracy of a wavelet packet decomposition method modified by a simple genetic algorithm (SGA), a novel genetic algorithm based on population variance and population entropy is proposed, and the wavelet packet decomposition method optimized by this algorithm is used to detect rail cracks. In the optimized method, the internal state of the population and the population diversity are linked to the evolutionary operations to adjust the crossover and mutation operators of the genetic algorithm. Further, a mathematical model describing the fault signal is established, and its parameters are optimized to extract information effectively. The proposed algorithm was tested on test functions and simulated fault signals of rail cracks. The results on simulated fault signals show that the convergence probability of the proposed algorithm is at best 45% higher than that of the SGA and 28% higher than that of an improved adaptive genetic algorithm (IAGA), and the accuracy of crack fault detection reaches above 92%. Meanwhile, the proposed algorithm is not prone to stagnation and has fast convergence and high fault detection accuracy. This research not only improves the performance of the SGA, but also provides a new detection method for fault diagnosis of wheel-rail noise.

  14. Practical Algorithms for Subgroup Detection in Covert Networks

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Qureshi, Pir Abdul Rasool

    2010-01-01

    In this paper, we present algorithms for subgroup detection and demonstrate them with a real-time case study of the USS Cole bombing terrorist network. The algorithms are demonstrated in an application by a prototype system. The system finds associations between terrorists and terrorist organisations...

  15. A Comparative Analysis of Community Detection Algorithms on Artificial Networks.

    Science.gov (United States)

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J

    2016-08-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions which make their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suitable algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time.

  16. Anomaly Detection and Diagnosis Algorithms for Discrete Symbols

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a set of novel algorithms which we call sequenceMiner that detect and characterize anomalies in large sets of high-dimensional symbol sequences that arise...

  17. Aquarius RFI Detection and Mitigation Algorithm: Assessment and Examples

    Science.gov (United States)

    Le Vine, David M.; De Matthaeis, P.; Ruf, Christopher S.; Chen, D. D.

    2013-01-01

    Aquarius is an L-band radiometer system designed to map sea surface salinity from space. This is a sensitive measurement, and protection from radio frequency interference (RFI) is important for success. An initial look at the performance of the Aquarius RFI detection and mitigation algorithm is reported together with examples of the global distribution of RFI at the L-band. To protect against RFI, Aquarius employs rapid sampling (10 ms) and a "glitch" detection algorithm that looks for outliers among the samples. Samples identified as RFI are removed, and the remainder is averaged to produce an RFI-free signal for the salinity retrieval algorithm. The RFI detection algorithm appears to work well over the ocean with modest rates for false alarms (5%) and missed detection. The global distribution of RFI coincides well with population centers and is consistent with observations reported by the Soil Moisture and Ocean Salinity mission.
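The sample-and-discard scheme above can be sketched with a simple robust-outlier filter over one batch of rapid samples (the deviation statistic and the k = 3 cutoff are illustrative assumptions, not the mission algorithm): flag samples far from a robust center, drop them, and average the remainder.

```python
def rfi_filtered_mean(samples, k=3.0):
    """Drop samples deviating more than k robust spreads from the median, then average."""
    s = sorted(samples)
    median = s[len(s) // 2]
    spread = sum(abs(v - median) for v in samples) / len(samples)
    clean = [v for v in samples if abs(v - median) <= k * spread] or samples
    return sum(clean) / len(clean)

# One batch of rapid (10 ms) antenna-temperature samples with an RFI "glitch":
batch = [100.1, 99.9, 100.0, 100.2, 99.8, 250.0]
print(round(rfi_filtered_mean(batch), 2))  # -> 100.0
```

Because the geophysical signal varies slowly across a batch while RFI appears as isolated spikes, discarding the flagged samples leaves an essentially unbiased mean for the salinity retrieval.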

  18. Multi-objective community detection based on memetic algorithm

    National Research Council Canada - National Science Library

    Wu, Peng; Pan, Li

    2015-01-01

    .... In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective...

  19. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    Science.gov (United States)

    2018-01-01

    ARL-TR-8268, US Army Research Laboratory, January 2018. Technical report (covering 1 October 2016 to 30 September 2017): An Automated Energy Detection Algorithm Based on Consecutive Mean Excision.
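
    The report text is not reproduced in this record, but consecutive mean excision as commonly described in the spectrum-sensing literature can be sketched as follows; the factor and seed size are illustrative assumptions, not values from the ARL report:

```python
import numpy as np

def consecutive_mean_excision(power, factor=1.5, seed=10):
    """Split sorted power samples into a noise set and excised signals.

    Classic CME: sort ascending, seed the clean (noise-only) set with
    the smallest samples, then admit each next sample only while it
    stays below factor * mean(clean set). Everything above the
    stopping point is flagged as signal/interference.
    Returns (detection_threshold, excised_samples).
    """
    x = np.sort(np.asarray(power, dtype=float))
    clean_end = min(seed, len(x))
    for i in range(clean_end, len(x)):
        if x[i] <= factor * x[:clean_end].mean():
            clean_end = i + 1
        else:
            break
    threshold = factor * x[:clean_end].mean()
    return threshold, x[clean_end:]
```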

  20. Plagiarism Detection Based on SCAM Algorithm

    DEFF Research Database (Denmark)

    Anzelmi, Daniele; Carlone, Domenico; Rizzello, Fabio

    2011-01-01

    Plagiarism is a complex problem and considered one of the biggest in publishing of scientific, engineering and other types of documents. Plagiarism has also increased with the widespread use of the Internet as large amount of digital data is available. Plagiarism is not just direct copy but also...... paraphrasing, rewording, adapting parts, missing references or wrong citations. This makes the problem more difficult to handle adequately. Plagiarism detection techniques are applied by making a distinction between natural and programming languages. Our proposed detection process is based on natural language...... document. Our plagiarism detection system, like many Information Retrieval systems, is evaluated with metrics of precision and recall....

  1. A fuzzy clustering algorithm to detect planar and quadric shapes

    Science.gov (United States)

    Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa

    1992-01-01

    In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is simple to compute and implement, and it overcomes many of the drawbacks of existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.

  2. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms have considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a feature set that is Haar-like and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm reflecting the state-of-the-art which makes it an appropriate choice as the baseline algorithm for the problem.
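
    The tracking step described above (continuous detection with greedy centroid association between frames) can be sketched independently of the OpenCV detector; the box format and the distance gate below are assumptions for illustration, not the authors' code:

```python
import math

def centroid(box):
    """Centre of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def greedy_match(prev_boxes, curr_boxes, max_dist=50.0):
    """Associate detections in consecutive frames by centroid distance.

    Repeatedly links the globally closest unused (prev, curr) pair
    until no remaining pair lies within max_dist pixels; returns a
    list of (index_in_prev, index_in_curr) matches.
    """
    pairs = sorted(
        (math.dist(centroid(p), centroid(c)), i, j)
        for i, p in enumerate(prev_boxes)
        for j, c in enumerate(curr_boxes)
    )
    used_p, used_c, matches = set(), set(), []
    for d, i, j in pairs:
        if d > max_dist:
            break
        if i not in used_p and j not in used_c:
            used_p.add(i)
            used_c.add(j)
            matches.append((i, j))
    return matches
```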

  3. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
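
    The core of the approach, AdaBoost with decision stumps as weak classifiers, can be sketched for continuous features. This is a textbook implementation for illustration; the paper's algorithm additionally handles categorical features, adaptable initial weights and overfitting control:

```python
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w
    (labels y in {-1, +1}). Returns (feature, threshold, polarity, error)."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(X[:, f] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost: reweight samples toward the mistakes of each stump."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        f, t, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # stump vote weight
        pred = np.where(X[:, f] <= t, pol, -pol)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, f] <= t, pol, -pol)
                for a, f, t, pol in ensemble)
    return np.sign(score)
```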

  4. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Algorithm for Detecting Significant Locations from Raw GPS Data

    Science.gov (United States)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms, since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information, constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
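
    A drastically simplified version of density-based selection can be sketched with a plain grid histogram; the paper's locality sensitive hashing and random histograms are replaced here by fixed grid cells, so this is only an illustration of the idea, not the authors' method:

```python
from collections import Counter

def dense_cells(points, cell=0.001, min_count=50):
    """Approximate density-based selection of significant locations.

    Hashes each (lat, lon) fix into a grid cell (roughly 100 m for
    cell=0.001 degrees) and keeps cells whose visit count reaches
    min_count. Cell size and count threshold are illustrative.
    """
    counts = Counter((round(lat / cell), round(lon / cell))
                     for lat, lon in points)
    return {c: n for c, n in counts.items() if n >= min_count}
```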

  6. A Motion Detection Algorithm Using Local Phase Information.

    Science.gov (United States)

    Lazar, Aurel A; Ukani, Nikul H; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.

  7. Global Algorithm Applied on Single Photon Detection

    Science.gov (United States)

    LIU, Hua; DING, Quanxin; Wang, Helong; Chen, Hongliang; GUO, Chunjie; ZHOU, Liwei

    2017-06-01

    This work makes three major contributions. First, it presents an applied study of the theory and experiment of single photon detection, including the design and experiment of quantum key distribution. Second, it studies methods for detector selection and for the configuration, design and construction of the main photoelectronic system, along with the relationship between these and the system characteristics. Third, it considers image sensor systems for single photon detection and evaluates the total system characteristics quantitatively. The results of simulation experiments and theoretical analysis demonstrate that the proposed method can improve system validity effectively, and show that the method is reasonable and efficient.

  8. Study on the Method of Face Detection Based on Chaos Genetic Algorithm Optimization AdaBoost Algorithm

    Directory of Open Access Journals (Sweden)

    Chai Yu

    2017-01-01

    Full Text Available Aiming at the problems that the traditional AdaBoost algorithm has complex feature computation, long training time and a low detection rate, a method of face detection based on chaos genetic algorithm optimization of the AdaBoost algorithm is proposed. Firstly, the algorithm uses image color segmentation for coarse screening of the face image, in order to determine the human skin area. Secondly, adaptive median filtering is applied to denoise the face image and improve its quality. Finally, the chaotic genetic algorithm is used to optimize the AdaBoost algorithm to achieve a higher detection rate and detection speed. The experimental results showed that, compared with the traditional AdaBoost algorithm, the proposed face detection method has a significant improvement in detection rate and detection speed.

  9. Detecting Community Structure by Using a Constrained Label Propagation Algorithm.

    Directory of Open Access Journals (Sweden)

    Jia Hou Chin

    Full Text Available Community structure is considered one of the most interesting features in complex networks. Many real-world complex systems exhibit community structure, where individuals with similar properties form a community. The identification of communities in a network is important for understanding the structure of said network from a specific perspective. Thus, community detection in complex networks has gained immense interest over the last decade. A lot of community detection methods have been proposed, and one of them is the label propagation algorithm (LPA). The simplicity and time efficiency of the LPA make it a popular community detection method. However, the LPA suffers from unstable detection due to the randomness that is induced in the algorithm. The focus of this paper is to improve the stability and accuracy of the LPA, while retaining its simplicity. Our proposed algorithm first detects the main communities in a network by using the number of mutual neighbouring nodes. Subsequently, nodes are added into communities by using a constrained LPA. Those constraints are then gradually relaxed until all nodes are assigned into groups. In order to refine the quality of the detected communities, nodes can be switched to another community or removed from their current community at various stages of the algorithm. We evaluated our algorithm on three types of benchmark networks, namely the Lancichinetti-Fortunato-Radicchi (LFR), Relaxed Caveman (RC) and Girvan-Newman (GN) benchmarks. We also apply the present algorithm to some real-world networks of various sizes. The current results show some promising potential of the proposed algorithm in terms of detecting communities accurately. Furthermore, our constrained LPA has a robustness and stability that are significantly better than those of the simple LPA, as it is able to yield deterministic results.
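
    The base LPA that the paper builds on is compact enough to sketch. The deterministic tie-break below (highest label wins, current label kept when tied) is an illustrative choice, since it is precisely the random tie-breaking of classic LPA that causes the instability the paper addresses:

```python
from collections import Counter

def label_propagation(adj, max_iter=100):
    """Minimal label propagation for community detection.

    adj: dict node -> iterable of neighbours. Each node repeatedly
    adopts the most frequent label among its neighbours; ties keep
    the current label if possible, otherwise take the highest label,
    so results are reproducible (classic LPA breaks ties randomly).
    """
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            freq = Counter(labels[u] for u in adj[v])
            top = max(freq.values())
            candidates = {l for l, c in freq.items() if c == top}
            if labels[v] not in candidates:
                labels[v] = max(candidates)
                changed = True
        if not changed:
            break
    return labels
```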

  10. Wideband Array Signal Detection Algorithm Based on Power Focusing

    Directory of Open Access Journals (Sweden)

    Gong Bin

    2012-09-01

    Full Text Available Aiming at the requirement of real-time signal detection in passive surveillance systems, a wideband array signal detection algorithm is proposed based on the concept of power focusing. By making use of the phase differences of the signal received by a uniform linear array, the algorithm focuses the power of the received signal in the Direction Of Arrival (DOA) using an improved cascade FFT. Subsequently, the probability density function of the output noise at each angle is derived. Furthermore, a Constant False Alarm Rate (CFAR) test statistic and the corresponding detection threshold are constructed. The theoretical probability of detection is also derived for different false alarm rates and Signal-to-Noise Ratios (SNR). The proposed algorithm is computationally efficient, and the detection process is independent of prior information. Meanwhile, the results can act as the initial value for other algorithms with higher precision. Simulation results show that the proposed algorithm achieves good performance for weak signal detection.
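
    The paper derives its own CFAR test statistic; as a generic illustration of constant-false-alarm-rate thresholding, a cell-averaging CFAR detector over a 1-D power series looks like this (window sizes and Pfa are illustrative assumptions):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR detector over a 1-D power series.

    For each cell under test, the noise level is estimated from
    `train` cells on each side (skipping `guard` cells around it),
    and the threshold factor alpha is set from the desired
    false-alarm probability for exponentially distributed noise:
    alpha = N * (Pfa^(-1/N) - 1), N = 2 * train.
    Returns the indices of detected cells.
    """
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    x = np.asarray(power, dtype=float)
    hits = []
    for i in range(guard + train, len(x) - guard - train):
        left = x[i - guard - train : i - guard]
        right = x[i + guard + 1 : i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()
        if x[i] > alpha * noise:
            hits.append(i)
    return hits
```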

  11. The algorithm of malicious code detection based on data mining

    Science.gov (United States)

    Yang, Yubo; Zhao, Yang; Liu, Xiabi

    2017-08-01

    Traditional malicious code detection technology has low accuracy and insufficient detection capability for new variants. As for malicious code detection based on data mining, its indicators are not accurate enough, and its classification detection efficiency is relatively low. This paper proposes an information gain ratio indicator based on N-grams to choose signatures; this indicator accurately reflects the detection weight of each signature and, helped by a C4.5 decision tree, improves the classification detection algorithm.

  12. Implementation of anomaly detection algorithms for detecting transmission control protocol synchronized flooding attacks

    CSIR Research Space (South Africa)

    Mkuzangwe, NNP

    2015-08-01

    Full Text Available . Furthermore, we fused the outcomes of the two algorithms using the logic OR operator at different thresholds of the two algorithms to obtain improved detection accuracy. Indeed, the results indicated that the OR operator performs better than the two algorithms...

  13. An Algorithm for Pedestrian Detection in Multispectral Image Sequences

    Science.gov (United States)

    Kniaz, V. V.; Fedorenko, V. V.

    2017-05-01

    The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearances of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic optical flow and the real optical flow yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmentation of the difference of optical flows. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.

  14. Research on a detection algorithm for abnormal state of load

    Directory of Open Access Journals (Sweden)

    Yu Dan

    2017-01-01

    Full Text Available This paper describes the development status and main problems of load monitoring, introduces its key technologies, and, using a load state monitoring system, illustrates a detection algorithm for abnormal states of the secondary load of the current transformer on the three-phase line of the power grid. The algorithm achieves real-time detection of abnormal states such as disconnection, short connection and series-connected semiconductors. The working principle of the algorithm is explained, the model formula is derived, and the state criterion is given. The load condition monitoring system was debugged, put into operation and tested in pilot operation, and the results show that the algorithm is effective.

  15. A Modularity Degree Based Heuristic Community Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Dongming Chen

    2014-01-01

    Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. The discovery of community structures is a basic research problem and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse the node partitions in order to optimize a given quality function. These optimization-function-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure, which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy compared with previous modularity optimization methods, even though it has lower computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.

  16. Algorithms for airborne Doppler radar wind shear detection

    Science.gov (United States)

    Gillberg, Jeff; Pockrandt, Mitch; Symosek, Peter; Benser, Earl T.

    1992-01-01

    Honeywell has developed algorithms for the detection of wind shear/microbursts using airborne Doppler radar. The Honeywell algorithms use three-dimensional pattern recognition techniques and the selection of an associated scanning pattern forward of the aircraft. This 'volumetric scan' approach acquires reflectivity, velocity, and spectral width from a three-dimensional volume, as opposed to the conventional use of a two-dimensional azimuthal slice of data at a fixed elevation. The algorithm approach is based on the detection and classification of velocity patterns that are indicative of the microburst phenomenon while minimizing false alarms due to ground clutter return. Simulation studies of the microburst phenomenon and X-band radar interaction with the microburst have been performed, and results of that study are presented. Algorithm performance in the detection of both 'wet' and 'dry' microbursts is presented.

  17. A new edge detection algorithm based on Canny idea

    Science.gov (United States)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and filtering based on Euclidean distance are adopted to preprocess the image; secondly, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local gradient amplitudes to obtain threshold values for the image: the average of all calculated thresholds is taken, half of this average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that the new method can effectively suppress noise disturbance, preserve edge information, and improve edge detection accuracy.
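
    The Otsu-based threshold selection described above can be sketched as follows; this is a standard Otsu implementation plus the paper's high/low relationship (low threshold = half of high), not the authors' code:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximising between-class variance.

    gray: array of integer values in [0, 255].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # class-0 probability
    mu = np.cumsum(p * np.arange(256))  # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def canny_thresholds(gray):
    """High/low hysteresis thresholds in the paper's style:
    high from Otsu, low as half of the high threshold."""
    high = otsu_threshold(gray)
    return high, high // 2
```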

  18. Adaptive clustering algorithm for community detection in complex networks

    Science.gov (United States)

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods and algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce an adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, where vertices always travel toward their preferred neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process in which vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman fast algorithm) through intensive evaluation. Applications to three real-world networks demonstrate the ability of our algorithm to find communities that parallel the actual organization in reality.

  19. Algorithm of detecting structural variations in DNA sequences

    Science.gov (United States)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2014-11-01

    Whole genome sequencing makes it possible to use the longest common subsequence algorithm to detect genetic structural variations. We propose to search for the positions of short unique fragments, genetic markers, to achieve acceptable time and space complexity. The markers are generated by algorithms that search the genetic sequence or its Fourier transform. The presented methods are tested on structural variations generated in silico in bacterial genomes, giving comparable or better results than other solutions.

  20. Junction Point Detection Algorithm for SAR Image

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2013-01-01

    Full Text Available In this paper, we propose a novel junction point detector based on an azimuth consensus for remote sensing images. To eliminate the impact of noise and some noncorrelated edges of SAR images, an azimuth consensus constraint is developed. In addition to detecting the locations of junctions at the subpixel level, this operator recognizes their structures as well. A new formula that includes a minimization criterion for the total weighted distance is proposed to compute the locations of junction points accurately. Compared with other well-known detectors, including Forstner, JUDOCA, and CPDA, the experimental results indicate that our operator outperforms them both in the location accuracy of junction points and in the angle accuracy of branch edges. Moreover, our method possesses satisfactory robustness to noise and to changes in the SAR images. Our operator can potentially be used to solve a number of problems in computer vision, such as SAR image registration, wide-baseline matching, and UAV navigation systems.

  1. Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral

    Directory of Open Access Journals (Sweden)

    Wenhui Li

    2014-01-01

    Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of vehicle detection methods. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measure and multifeature fusion. In the feature similarity measure phase, we first propose a taillight-based vehicle detection method, and then a vehicle taillight feature similarity measure is defined. Second, combining with the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused together by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method can not only improve the detection rate but also reduce the false alarm rate, which meets the engineering requirements of Advanced Driving Assistance Systems (ADAS).
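
    The discrete Choquet integral used for fusion is standard and can be sketched directly; the two-criterion fuzzy measure below is hypothetical, whereas the paper fuses three feature similarity measures with its own measures:

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure.

    scores: dict criterion -> value in [0, 1].
    measure: dict frozenset of criteria -> weight in [0, 1]; must be
    monotone with measure of the empty set 0 and of the full set 1.
    Criteria are sorted by ascending score, and each score increment
    is weighted by the measure of the set of criteria scoring at
    least that much.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for crit, val in items:
        total += (val - prev) * measure[frozenset(remaining)]
        prev = val
        remaining.remove(crit)
    return total
```

    With an additive measure this reduces to a weighted mean; non-additive measures let interacting criteria reinforce or inhibit each other, which is the point of using it for fusion.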

  2. Application of the Apriori algorithm for adverse drug reaction detection.

    Science.gov (United States)

    Kuo, M H; Kushniruk, A W; Borycki, E M; Greig, D

    2009-01-01

    The objective of this research is to assess the suitability of the Apriori association analysis algorithm for the detection of adverse drug reactions (ADR) in health care data. The Apriori algorithm is used to perform association analysis on the characteristics of patients, the drugs they are taking, their primary diagnosis, co-morbid conditions, and the ADRs or adverse events (AE) they experience. This analysis produces association rules that indicate what combinations of medications and patient characteristics lead to ADRs. A simple data set is used to demonstrate the feasibility and effectiveness of the algorithm.
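
    A toy version of the Apriori frequent-itemset search on drug/event records can be sketched as follows; the transactions and the support threshold are invented for illustration:

```python
def apriori(transactions, min_support=0.4):
    """Frequent itemsets via the Apriori level-wise search.

    transactions: list of sets, e.g. the drugs, diagnoses and adverse
    events recorded for one patient. Returns a dict mapping each
    frequent itemset to its support.
    """
    n = len(transactions)
    tsets = [set(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in tsets) / n

    # Frequent 1-itemsets.
    frequent = {frozenset([i]): support(frozenset([i]))
                for t in tsets for i in t
                if support(frozenset([i])) >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate k-itemsets: unions of frequent (k-1)-itemsets.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        frequent = {c: support(c) for c in candidates
                    if support(c) >= min_support}
        result.update(frequent)
        k += 1
    return result
```

    An association rule such as drugA implies rash then follows with confidence support({drugA, rash}) / support({drugA}).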

  3. A source location algorithm of lightning detection networks in China

    Directory of Open Access Journals (Sweden)

    Z. X. Hu

    2010-10-01

    Full Text Available Fast and accurate retrieval of lightning sources is crucial to early warning and quick repair of lightning damage. An algorithm for computing the location and onset time of cloud-to-ground lightning using time-of-arrival (TOA) and azimuth-of-arrival (AOA) data is introduced in this paper. The algorithm can iteratively calculate the least-squares solution of a lightning source on an oblate spheroidal Earth. It contains a set of unique formulas to compute the geodesic distance and azimuth, and an explicit method to compute the initial position using TOA data from only three sensors. Since the method accounts for the effects of the oblateness of the Earth, it provides a more accurate solution than algorithms based on planar or spherical surface models. Numerical simulations are presented to test the algorithm and evaluate the performance of a lightning detection network in the Hubei province of China. Since the 1990s, the proposed algorithm has been used in many regional lightning detection networks installed by the electric power system in China. It is expected that the proposed algorithm will be used in more lightning detection networks and other location systems.
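
    The paper's solver works on an oblate spheroid; as a much simplified planar illustration of the TOA idea, a coarse grid search for the source that minimizes arrival-time residuals (eliminating the unknown onset time analytically) can serve as a starting point for an iterative solver. All numbers below are illustrative assumptions:

```python
import itertools
import math

C = 3.0e5  # propagation speed in km/s (speed of light)

def grid_initial_position(sensors, arrivals, extent=200.0, step=5.0):
    """Coarse planar grid search for a lightning stroke location.

    sensors: list of (x, y) positions in km; arrivals: observed TOAs
    in seconds. For each candidate source the unknown onset time t0
    is eliminated analytically: the least-squares t0 is the mean of
    (observed arrival - predicted travel time). The best grid point
    can seed an iterative geodesic least-squares solver.
    """
    best, best_r = None, math.inf
    ticks = [i * step - extent for i in range(int(2 * extent / step) + 1)]
    for x, y in itertools.product(ticks, ticks):
        travel = [math.dist((x, y), p) / C for p in sensors]
        t0 = sum(t - tr for t, tr in zip(arrivals, travel)) / len(arrivals)
        r = sum((t0 + tr - t) ** 2 for tr, t in zip(travel, arrivals))
        if r < best_r:
            best, best_r = (x, y), r
    return best
```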

  4. Community detection algorithm evaluation with ground-truth data

    Science.gov (United States)

    Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef

    2018-02-01

    Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort in order to develop efficient community detection algorithms. Unfortunately, the issue of a fair assessment of these algorithms is a thriving open question. If the ground-truth community structure is available, various clustering-based metrics are used in order to compare it versus the one discovered by these algorithms. However, these metrics defined at the node level are fairly insensitive to the variation of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.

  5. SIDRA: a blind algorithm for signal detection in photometric surveys

    Science.gov (United States)

    Mislis, D.; Bachelet, E.; Alsubai, K. A.; Bramich, D. M.; Parley, N.

    2016-01-01

    We present the Signal Detection using Random-Forest Algorithm (SIDRA). SIDRA is a detection and classification algorithm based on the Random Forest machine learning technique. The goal of this paper is to show the power of SIDRA for quick and accurate signal detection and classification. We first diagnose the power of the method with simulated light curves and then try it on a subset of the Kepler space mission catalogue. We use five classes of simulated light curves (CONSTANT, TRANSIT, VARIABLE, MLENS and EB for constant light curves, transiting exoplanets, variables, microlensing events and eclipsing binaries, respectively) to analyse the power of the method. The algorithm uses four features in order to classify the light curves. The training sample contains 5000 light curves (1000 from each class) and 50 000 random light curves for testing. The total SIDRA success ratio is ≥90 per cent. Furthermore, the success ratio reaches 95-100 per cent for the CONSTANT, VARIABLE, EB and MLENS classes and 92 per cent for the TRANSIT class with a decision probability of 60 per cent. Because the TRANSIT class is the one that fails most often, we run a simultaneous fit using SIDRA and a Box Least Squares (BLS)-based algorithm for searching for transiting exoplanets. As a result, our algorithm detects 7.5 per cent more planets than a classic BLS algorithm, with better results for lower signal-to-noise light curves. SIDRA catches 98 per cent of the planet candidates in the Kepler sample and fails for 7 per cent of the false alarm subset. SIDRA promises to be useful as a detection algorithm and/or classifier for large photometric surveys such as the TESS and PLATO future exoplanet space missions.

  6. Eu-Detect: An algorithm for detecting eukaryotic sequences in ...

    Indian Academy of Sciences (India)

    [Figure caption fragment] Plots depict the classification accuracy of Eu-Detect for various combinations of 'cumulative sequence count' (40K, 50K, 60K, 70K, 80K) and 'coverage threshold' (20%, 30%, 40%, 50%, 60%, 70%, 80%); blue bars represent Eu-Detect's average classification accuracy with eukaryotic data sets.

  7. ENHANCED COMPONENT DETECTION ALGORITHM OF FULL-WAVEFORM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    M. Zhou

    2013-05-01

    Full Text Available When full-waveform LiDAR (FW-LiDAR) data are applied to extract component feature information of interest targets, there exists a problem of components being lost during the waveform decomposition procedure, which severely constrains the performance of subsequent target information extraction. Focusing on this problem, an enhanced component detection algorithm, which combines the Finite Mixture Method (FMM), the Levenberg-Marquardt (LM) algorithm and Penalized Minimum Matching Distance (PMMD), is proposed in this paper. The algorithms for parameter initialization, waveform decomposition and missing component detection have all been improved, which greatly increases the precision of component detection and guarantees the precision of waveform decomposition, helping the extraction of weak information on interest targets. The effectiveness of this method is verified by experimental results on simulated and measured data.
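    A minimal sketch of the waveform decomposition step: a full-waveform return is modelled as a mixture of Gaussian components whose initial parameter estimates are refined with the Levenberg-Marquardt algorithm. SciPy is assumed; this is not the paper's implementation, and the PMMD-based missing-component detection is omitted.

```python
# Fit a sum of Gaussians to a simulated FW-LiDAR waveform using
# Levenberg-Marquardt (SciPy's least_squares with method="lm").
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 100, 400)

def model(p, t):
    """Sum of Gaussian components; p = [A1, mu1, s1, A2, mu2, s2, ...]."""
    out = np.zeros_like(t)
    for A, mu, s in np.reshape(p, (-1, 3)):
        out += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return out

# Simulated two-return waveform plus noise.
true_params = [1.0, 30.0, 3.0, 0.6, 55.0, 4.0]
wave = model(true_params, t) \
       + 0.01 * np.random.default_rng(1).standard_normal(t.size)

# Rough initial guesses (in practice, e.g. from local maxima), refined by LM.
fit = least_squares(lambda p: model(p, t) - wave,
                    x0=[0.8, 28.0, 2.0, 0.5, 58.0, 5.0], method="lm")
print(np.round(np.reshape(fit.x, (-1, 3)), 2))   # rows: [A, mu, sigma]
```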

  8. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    Full Text Available The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties, are employed. For efficient smoke detection on the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required by algorithms that operate in the spatial domain. In the proposed algorithm the DCT inter-transformation technique is used to increase the detection accuracy without an inverse DCT operation. In the proposed scheme, the candidate smoke regions are first estimated using the motion and color properties of smoke; next, noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are further analyzed over time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates of approximately 4% and 2%, respectively, is obtained.

  9. VIPR: A probabilistic algorithm for analysis of microbial detection microarrays

    Directory of Open Access Journals (Sweden)

    Holbrook Michael R

    2010-07-01

    Full Text Available Abstract Background All infectious disease oriented clinical diagnostic assays in use today focus on detecting the presence of a single, well-defined target agent or a set of agents. In recent years, microarray-based diagnostics have been developed that greatly facilitate the highly parallel detection of multiple microbes that may be present in a given clinical specimen. While several algorithms have been described for interpretation of diagnostic microarrays, none of the existing approaches is capable of incorporating training data generated from positive control samples to improve performance. Results To specifically address this issue we have developed a novel interpretive algorithm, VIPR (Viral Identification using a PRobabilistic algorithm), which uses Bayesian inference to capitalize on empirical training data to optimize detection sensitivity. To illustrate this approach, we have focused on the detection of viruses that cause hemorrhagic fever (HF) using a custom HF-virus microarray. VIPR was used to analyze 110 empirical microarray hybridizations generated from 33 distinct virus species. An accuracy of 94% was achieved as measured by leave-one-out cross validation. Conclusions VIPR outperformed previously described algorithms for this dataset. The VIPR algorithm has potential to be broadly applicable to clinical diagnostic settings, wherein positive controls are typically readily available for generation of training data.

  10. An Algorithm to Detect the Retinal Region of Interest

    Science.gov (United States)

    Şehirli, E.; Turan, M. K.; Demiral, E.

    2017-11-01

    The retina is one of the important layers of the eye; it includes cells sensitive to colour and light, as well as nerve fibers. The retina can be imaged using medical devices such as a fundus camera or an ophthalmoscope. Hence, lesions such as microaneurysms, haemorrhages and exudates, associated with many diseases of the eye, can be detected by examining the images taken by these devices. In computer vision and biomedical areas, studies on automatically detecting lesions of the eye have been carried out for a long time. In order to make automated detections, the concept of ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concern of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: a pre-processing stage, detection of the ROI on the processed images, and overlapping of the input image with the obtained ROI.

  11. New algorithm for islanding detection of wind turbines driven DFIG

    Energy Technology Data Exchange (ETDEWEB)

    Mirzaei, Jaber [Monenco Consulting Engineering Company, Tehran (Iran, Islamic Republic of). Dept. of Wind Energy; Kargar, H. Kazemi [Shahid Beheshti Univ., Tehran (Iran, Islamic Republic of). Dept. of Electrical and Computer Engineering

    2011-07-01

    This paper proposes a new algorithm for detecting islanding conditions of wind farms under various loading conditions, combining both passive and active methods for islanding detection. The passive part detects probable islanding conditions and the active part examines and verifies these probable conditions. The bus voltages and the first- and second-order frequency derivatives are the parameters used in the passive part of the algorithm, and capacitance variations of capacitor banks are employed in the active part. The new method was validated by simulation results for the Manjil wind farm located in Manjil, Iran. (orig.)

  12. Antivibration pipeline-filtering algorithm for maritime small target detection

    Science.gov (United States)

    Wang, Bin; Xu, Wenhai; Zhao, Ming; Wu, Houde

    2014-11-01

    When searching for small targets at sea with an infrared imaging system, irregular and random vibration of the airborne imaging platform causes intense interference for pipeline-filtering, an algorithm that performs well in detecting small targets but is particularly sensitive to interframe vibration of sequence images. This paper puts forward a pipeline-filtering algorithm with good self-adaptive antivibration performance. Using a block matching method that combines the normalized cross-correlation coefficient with normalized mutual information, the interframe vibration of the sequence images is acquired in real time and used to correct the coordinates of the single-frame detection results; the corrected detection results are then used to complete the pipeline-filtering. In addition, under severe sea conditions, small targets at sea may disappear transiently, leading to missed detections; the algorithm is also able to resolve this problem. Experimental results show that the algorithm overcomes the problem of interframe vibration of sequence images, thus realizing accurate detection of small maritime targets.
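    The normalized cross-correlation half of the block matching step can be sketched as follows (pure NumPy, integer shifts only; illustrative, not the paper's implementation, which additionally uses normalized mutual information):

```python
# Estimate the interframe shift between two frames by maximizing the
# normalized cross-correlation (NCC) coefficient over a search window.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def estimate_shift(ref, cur, search=5):
    """Find the integer (dy, dx) shift of `cur` w.r.t. `ref` maximizing NCC."""
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            block = np.roll(np.roll(cur, -dy, axis=0), -dx, axis=1)
            score = ncc(ref[search:-search, search:-search],
                        block[search:-search, search:-search])
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # vibration of (3, -2)
print(estimate_shift(ref, cur))                      # recovers (3, -2)
```

The recovered shift would then be used to correct the coordinates of single-frame detections before pipeline-filtering.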

  13. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M.; Petrick, Nicholas; Hara, Amy K.

    2010-01-01

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments with evolutionary algorithms. The false positive rate was statistically significantly lower for the two-step algorithm than for the one-step algorithm at 63% of all possible operating points. While operating at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the false positive rate was reduced by 24.4% (95% confidence interval 17.9-31.0%) or 45.8% (95% confidence interval 40.1-51.0%), respectively. We demonstrated that, with a proper experimental design, the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms. PMID:20548966
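    The Pareto-front idea above can be sketched as follows: among candidate operating points (sensitivity to be maximized, false positives per scan to be minimized), keep only the non-dominated ones (the values below are illustrative, not the study's data):

```python
# Extract the Pareto front of detector operating points: a point survives
# if no other point has both higher-or-equal sensitivity and
# lower-or-equal false positive rate.
def pareto_front(points):
    """Return the non-dominated (sensitivity, fp_rate) points, sorted."""
    front = []
    for sens, fp in points:
        dominated = any(s >= sens and f <= fp and (s, f) != (sens, fp)
                        for s, f in points)
        if not dominated:
            front.append((sens, fp))
    return sorted(front)

candidates = [(0.90, 4.1), (0.88, 2.2), (0.90, 3.5), (0.85, 2.5), (0.92, 6.0)]
print(pareto_front(candidates))
# (0.85, 2.5) and (0.90, 4.1) are dominated; the rest form the front
```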

  14. Genome-wide algorithm for detecting CNV associations with diseases

    Directory of Open Access Journals (Sweden)

    Peng Bo

    2011-08-01

    Full Text Available Abstract Background SNP genotyping arrays have been developed to characterize single-nucleotide polymorphisms (SNPs) and DNA copy number variations (CNVs). Nonparametric and model-based statistical algorithms have been developed to detect CNVs from SNP data using the marker intensities. However, these algorithms lack specificity in detecting small CNVs owing to the high false positive rate when calling CNVs based on the intensity values. Therefore, the resulting association tests lack power even if the CNVs affecting disease risk are common. An alternative procedure called PennCNV uses information from both the marker intensities and the genotypes and therefore has increased sensitivity. Results By using the hidden Markov model (HMM) implemented in PennCNV to derive the probabilities of different copy number states, which we subsequently used in a logistic regression model, we developed a new genome-wide algorithm to detect CNV associations with diseases. We compared this new method with an association test applied to the most probable copy number state for each individual, as provided by PennCNV after an initial HMM analysis followed by application of the Viterbi algorithm, which removes information about copy number probabilities. In one of our simulation studies, we showed that for large CNVs (number of SNPs ≥ 10), the association tests based on PennCNV calls gave more significant results, but the new algorithm retained high power. For small CNVs (number of SNPs < 10), the logistic algorithm provided smaller average p-values (e.g., p = 7.54e-17 when relative risk RR = 3.0) in all the scenarios and could capture signals that PennCNV did not (e.g., p = 0.020 when RR = 3.0).
From a second set of simulations, we showed that the new algorithm is more powerful in detecting disease associations with small CNVs (number of SNPs ranging from 3 to 5 under different penetrance models (e.g., when RR = 3.0, for relatively weak signals, power = 0
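    The core idea of the record above, regressing disease status on posterior copy-number probabilities rather than on hard Viterbi calls, can be sketched with a tiny logistic regression (NumPy gradient ascent on simulated data; this is not the authors' implementation, and the simulated effect sizes are arbitrary):

```python
# Logistic regression of disease status on the HMM posterior probability
# of carrying a deletion, keeping the probabilistic information that a
# hard Viterbi copy-number call would discard.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
p_del = rng.random(n)                       # posterior P(deletion) per subject
true_logit = -1.0 + 2.0 * p_del             # deletion probability raises risk
disease = rng.random(n) < 1 / (1 + np.exp(-true_logit))

X = np.column_stack([np.ones(n), p_del])    # intercept + probability covariate
beta = np.zeros(2)
for _ in range(5000):                       # plain gradient ascent on the
    mu = 1 / (1 + np.exp(-X @ beta))        # log-likelihood
    beta += 2.0 * X.T @ (disease - mu) / n

print(np.round(beta, 1))                    # close to the true (-1.0, 2.0)
```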

  15. Comparison Between Four Detection Algorithms for GEO Objects

    Science.gov (United States)

    Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.

    2012-09-01

    Four detection algorithms for GEO objects are being developed in collaboration between Kyushu University, IHI Corporation and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is the PC-based stacking method, which has been developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image. Sub-images are cropped from many CCD images to fit the movement of the objects, and a median image of all the sub-images is then created. Although this method has the ability to detect faint objects, it takes time to analyze. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last frame. This can analyze data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. This has the same analysis speed as the line-identifying technique and better detection capability in terms of faintness. The fourth is the FPGA-based stacking method, which uses binarized images and a new algorithm installed on an FPGA board, reducing analysis time by roughly a factor of one thousand. All four algorithms analyzed the same sets of data to evaluate their advantages and disadvantages. By comparing their analysis times and results, an optimal usage of these algorithms is considered.
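    The median stacking idea can be sketched as follows (NumPy; frames are shifted back along an assumed object motion and median-combined, so a faint moving object builds up while the noise background is suppressed — a simplified illustration, not JAXA's implementation):

```python
# Median stacking: align 30 noisy frames along a candidate motion and
# take the pixel-wise median. The object is far below the single-frame
# noise peaks, yet emerges clearly in the stacked image.
import numpy as np

rng = np.random.default_rng(4)
frames = rng.normal(0.0, 1.0, (30, 50, 50))      # 30 noisy CCD frames
for i in range(30):
    frames[i, 25, 10 + i] += 2.0                 # faint object, 1 px/frame

# Shift each frame back along the assumed motion, then median-combine.
aligned = np.stack([np.roll(frames[i], -i, axis=1) for i in range(30)])
stacked = np.median(aligned, axis=0)

peak = np.unravel_index(np.argmax(stacked), stacked.shape)
print(peak)   # the object emerges at its aligned position (25, 10)
```

In practice the motion is unknown, so many candidate shift vectors are tried, which is what makes the method slow and motivates the averaging and FPGA variants described above.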

  16. Algorithms of Crescent Structure Detection in Human Biological Fluid Facies

    Science.gov (United States)

    Krasheninnikov, V. R.; Malenova, O. E.; Yashina, A. S.

    2017-05-01

    One of the effective methods of early medical diagnosis is based on the image analysis of human biological fluids. In the process of fluid crystallization, characteristic patterns (markers) appear in the resulting layer (facies). Each marker is a highly probable sign of some pathology, even at an early stage of disease development. When mass health examinations are carried out, it is necessary to analyze a large number of images. That is why the development of algorithms and software for automated image processing is rather urgent nowadays. This paper presents algorithms to detect crescent structures in images of blood serum and cervical mucus facies. Such a marker indicates the symptoms of ischemic disease. The presented algorithm detects this marker with high probability while the probability of false alarm remains low.

  17. A Supervised Classification Algorithm for Note Onset Detection

    Directory of Open Access Journals (Sweden)

    Douglas Eck

    2007-01-01

    Full Text Available This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as being onsets or non-onsets. Frames classified as onsets are then treated with a simple peak-picking algorithm based on a moving average. We present two versions of this approach. The first version uses a single neural network classifier. The second version combines the predictions of several networks trained using different hyperparameters. We describe the details of the algorithm and summarize the performance of both variants on several datasets. We also examine our choice of hyperparameters by describing results of cross-validation experiments done on a custom dataset. We conclude that a supervised learning approach to note onset detection performs well and warrants further investigation.
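    The peak-picking stage described above can be sketched as follows (pure Python; the window size and bias threshold are illustrative parameters, not those of the paper):

```python
# Moving-average peak picking: a frame is reported as an onset when its
# classifier activation exceeds the local moving average by a bias AND
# is the maximum within its neighbourhood.
def pick_onsets(activation, w=3, bias=0.1):
    onsets = []
    for i in range(len(activation)):
        lo, hi = max(0, i - w), min(len(activation), i + w + 1)
        window = activation[lo:hi]
        mavg = sum(window) / len(window)
        if activation[i] > mavg + bias and activation[i] == max(window):
            onsets.append(i)
    return onsets

# Activations as a frame classifier might emit them: two clear peaks.
act = [0.1, 0.2, 0.9, 0.3, 0.1, 0.1, 0.2, 0.8, 0.2, 0.1]
print(pick_onsets(act))   # -> [2, 7]
```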

  18. Assessment of a novel mass detection algorithm in mammograms

    Directory of Open Access Journals (Sweden)

    Ehsan Kozegar

    2013-01-01

    Settings and Design: The proposed mass detector consists of two major steps. In the first step, several suspicious regions are extracted from the mammograms using an adaptive thresholding technique. In the second step, false positives originating from the previous stage are reduced by a machine learning approach. Materials and Methods: All modules of the mass detector were assessed on the mini-MIAS database. In addition, the algorithm was tested on the INBreast database for further validation. Results: According to FROC analysis, our mass detection algorithm outperforms other competing methods. Conclusions: We should not insist solely on sensitivity in the segmentation phase: if we ignored the FP rate and aimed only at higher sensitivity, the learning algorithm would be biased toward false positives and sensitivity would decrease dramatically in the false positive reduction phase. Therefore, we should treat mass detection as a cost-sensitive problem, because misclassification costs are not the same in this type of problem.

  19. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  20. A Monte Carlo Evaluation of Weighted Community Detection Algorithms.

    Science.gov (United States)

    Gates, Kathleen M; Henry, Teague; Steinley, Doug; Fair, Damien A

    2016-01-01

    The past decade has been marked with a proliferation of community detection algorithms that aim to organize nodes (e.g., individuals, brain regions, variables) into modular structures that indicate subgroups, clusters, or communities. Motivated by the emergence of big data across many fields of inquiry, these methodological developments have primarily focused on the detection of communities of nodes from matrices that are very large. However, it remains unknown if the algorithms can reliably detect communities in smaller graph sizes (i.e., 1000 nodes and fewer) which are commonly used in brain research. More importantly, these algorithms have predominantly been tested only on binary or sparse count matrices and it remains unclear the degree to which the algorithms can recover community structure for different types of matrices, such as the often used cross-correlation matrices representing functional connectivity across predefined brain regions. Of the publicly available approaches for weighted graphs that can detect communities in graph sizes of at least 1000, prior research has demonstrated that Newman's spectral approach (i.e., Leading Eigenvalue), Walktrap, Fast Modularity, the Louvain method (i.e., multilevel community method), Label Propagation, and Infomap all recover communities exceptionally well in certain circumstances. The purpose of the present Monte Carlo simulation study is to test these methods across a large number of conditions, including varied graph sizes and types of matrix (sparse count, correlation, and reflected Euclidean distance), to identify which algorithm is optimal for specific types of data matrices. The results indicate that when the data are in the form of sparse count networks (such as those seen in diffusion tensor imaging), Label Propagation and Walktrap surfaced as the most reliable methods for community detection. 
For dense, weighted networks such as correlation matrices capturing functional connectivity, Walktrap consistently
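    Label Propagation, one of the methods reported above as most reliable for sparse count networks, can be sketched in a few lines (pure Python, sequential sweeps with deterministic tie-breaking; production variants update nodes in random order):

```python
# Label propagation on an undirected graph: every node repeatedly adopts
# the most common label among its neighbours; communities emerge where
# labels stop changing.
from collections import Counter

def label_propagation(adj, iters=20):
    labels = {v: v for v in adj}              # start: every node its own label
    for _ in range(iters):
        for v in sorted(adj):                 # sequential, deterministic sweep
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            # deterministic tie-break: highest label among the most common
            labels[v] = max(l for l, c in counts.items() if c == best)
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)
print(labels)   # each clique collapses to a single label
```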

  3. Swarm, genetic and evolutionary programming algorithms applied to multiuser detection

    Directory of Open Access Journals (Sweden)

    Paul Jean Etienne Jeszensky

    2005-02-01

    Full Text Available In this paper, the particle swarm optimization technique, recently published in the literature and applied to Direct Sequence/Code Division Multiple Access (DS/CDMA) systems with multiuser detection (MuD), is analyzed, evaluated and compared. The efficiency of the swarm algorithm applied to DS-CDMA multiuser detection (Swarm-MuD) is compared through the trade-off between performance and computational complexity, with complexity expressed in terms of the number of operations necessary to reach the performance obtained through the optimum detector, i.e., the Maximum Likelihood (ML) detector. The comparison is accomplished among the genetic algorithm, evolutionary programming with cloning and the swarm algorithm under the same simulation basis. Additionally, a heuristic MuD complexity analysis based on the number of computational operations is proposed. Finally, an analysis is carried out of the input parameters of the swarm algorithm in an attempt to find the optimum (or near-optimum) parameters for the algorithm applied to the MuD problem.

  4. An artificial intelligent algorithm for tumor detection in screening mammogram.

    Science.gov (United States)

    Zheng, L; Chan, A K

    2001-07-01

    A cancerous tumor mass is one of the major types of breast cancer. When cancerous masses are embedded in and camouflaged by varying densities of parenchymal tissue structures, they are very difficult to detect visually on mammograms. This paper presents an algorithm that combines several artificial intelligence (AI) techniques with the discrete wavelet transform (DWT) for the detection of masses in mammograms. The AI techniques include fractal dimension analysis, a multiresolution Markov random field, the dogs-and-rabbits algorithm, and others. The fractal dimension analysis serves as a preprocessor to determine the approximate locations of the regions suspicious for cancer in the mammogram. The dogs-and-rabbits clustering algorithm is used to initiate the segmentation at the LL subband of a three-level DWT decomposition of the mammogram. A tree-type classification strategy is applied at the end to determine whether a given region is suspicious for cancer. We have verified the algorithm with 322 mammograms in the Mammographic Image Analysis Society database. The verification results show that the proposed algorithm has a sensitivity of 97.3% with 3.92 false positives per image.

  5. Detecting microsatellites within genomes: significant variation among algorithms

    Directory of Open Access Journals (Sweden)

    Rivals Eric

    2007-04-01

    Full Text Available Abstract Background Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared with fixed parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences in all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
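    A minimal sketch of what such detectors search for: a backreference regex that extracts perfect tandem repeats of 1-6 bp motifs (pure Python; real tools such as TRF and Sputnik also score imperfect, divergent repeats, which is where the differences reported above arise):

```python
# Find perfect microsatellites: a motif of 1-6 bp repeated to span at
# least `min_len` bases. The non-greedy group prefers the shortest motif.
import re

def find_microsatellites(seq, min_len=10):
    """Return (start, motif, repeat_count) for perfect tandem repeats."""
    out = []
    for m in re.finditer(r"(([ACGT]{1,6}?)\2+)", seq):
        whole, motif = m.group(1), m.group(2)
        if len(whole) >= min_len:
            out.append((m.start(), motif, len(whole) // len(motif)))
    return out

seq = "GATT" + "AG" * 8 + "TTC" + "CCG" * 5 + "AA"
hits = find_microsatellites(seq)
print(hits)
# Finds the AG repeat at position 4; the CCG run is reported in a
# shifted phase (GCC) because the scan consumes the leading "CCC".
```

The phase-shift quirk in the second hit is itself a small example of why different algorithms report different microsatellite counts and lengths for the same sequence.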

  6. A Universal High-Performance Correlation Analysis Detection Model and Algorithm for Network Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Hongliang Zhu

    2017-01-01

    Full Text Available In the big data era, single detection techniques no longer meet the demands posed by complex network attacks and advanced persistent threats, and there is no uniform standard allowing different correlation analysis detection methods to be performed efficiently and accurately. In this paper, we put forward a universal correlation analysis detection model and algorithm by introducing a state transition diagram. Based on analyzing and comparing current correlation detection modes, we formalize the correlation patterns, propose a framework according to data packet timing and behavior qualities, and then design a new universal algorithm to implement the method. Finally, an experiment, which sets up a lightweight intrusion detection system using the KDD1999 dataset, shows that the correlation detection model and algorithm can improve performance and guarantee high detection rates.

  7. Advanced defect detection algorithm using clustering in ultrasonic NDE

    Science.gov (United States)

    Gongzhang, Rui; Gachagan, Anthony

    2016-02-01

    A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single-transducer or phased array configurations. The proposed algorithm uses the concept of unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as 'legitimate reflector' or 'artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) are determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provides an additional reduction in PFA while maintaining PoD for both samples compared with SSP alone.

  8. DDoS Attack Detection Algorithms Based on Entropy Computing

    Science.gov (United States)

    Li, Liying; Zhou, Jianying; Xiao, Ning

    Distributed Denial of Service (DDoS) attacks pose a severe threat to the Internet. It is difficult to find the exact signature of an attack. Moreover, it is hard to distinguish an unusually high volume of traffic caused by an attack from one that occurs when a huge number of users occasionally access the target machine at the same time. The entropy detection method is an effective way to detect DDoS attacks; it is mainly used to calculate the distribution randomness of some attributes in the network packets' headers. In this paper, we focus on detection technology for DDoS attacks. We improve the previous entropy detection algorithm and propose two enhanced detection methods based on cumulative entropy and time, respectively. Experimental results show that these methods lead to more accurate and effective DDoS detection.
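    The entropy computation at the heart of such methods can be sketched as follows (pure Python; the traffic windows and the IP addresses are illustrative assumptions, not the paper's data):

```python
# Shannon entropy of the source-IP distribution in a traffic window:
# it is high when traffic is spread over many sources and collapses
# when a few sources dominate, as in a flooding attack.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

normal = [f"10.0.0.{i % 50}" for i in range(1000)]    # 50 balanced sources
attack = normal[:200] + ["203.0.113.7"] * 800         # one source floods

print(round(entropy(normal), 2))   # high: traffic spread over many IPs
print(round(entropy(attack), 2))   # low: distribution has collapsed
```

A detector would flag a window whose entropy falls far below a baseline; the paper's enhancements accumulate entropy over time rather than thresholding a single window.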

  9. Fall detection using supervised machine learning algorithms: A comparative study

    KAUST Repository

    Zerrouki, Nabil

    2017-01-05

    Fall incidents are a leading cause of disability and even mortality among older adults. To address this problem, fall detection and prevention have received a great deal of attention over the past years and attracted many research efforts. In the current study we present an overall performance comparison of fall detection systems using the most popular machine learning approaches: Naïve Bayes, K-nearest neighbor, neural network, and support vector machine. The classification power of these widely used algorithms is analyzed on two fall detection databases, FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation uses several state-of-the-art statistical measures, including overall accuracy, the F-measure, and the area under the ROC curve (AUC).
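
    Two of the comparison metrics named above (overall accuracy and the F-measure) have simple closed-form definitions; a minimal sketch, using an invented toy label vector rather than FDD/URFD data:

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f_measure(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the `positive` class
    (here: 1 = fall, 0 = daily activity)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # ground truth: fall / no-fall
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # one classifier's output
```

    Computing the same metrics from the same features for every classifier, as the study does, is what makes the comparison fair.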

  10. BOUNDARY DETECTION ALGORITHMS IN WIRELESS SENSOR NETWORKS: A SURVEY

    Directory of Open Access Journals (Sweden)

    Lanny Sitanayah

    2009-01-01

    Full Text Available Wireless sensor networks (WSNs) comprise a large number of sensor nodes, which are spread out within a region and communicate using wireless links. In some WSN applications, recognizing boundary nodes is important for topology discovery, geographic routing, and tracking. In this paper, we study the problem of recognizing the boundary nodes of a WSN. We first identify the factors that influence the design of algorithms for boundary detection. Then, we classify the existing work on boundary detection, which is vital for target tracking to detect when targets enter or leave the sensor field.

  11. Rapid and Reliable Diagnostic Algorithm for Detection of Clostridium difficile

    Science.gov (United States)

    Fenner, Lukas; Widmer, Andreas F.; Goy, Gisela; Rudin, Sonja; Frei, Reno

    2008-01-01

    We evaluated a two-step algorithm for detection of Clostridium difficile in 1,468 stool specimens. First, specimens were screened by an immunoassay for C. difficile glutamate dehydrogenase antigen (C.DIFF CHEK-60). Second, screen-positive specimens underwent toxin testing by a rapid toxin A/B assay (TOX A/B QUIK CHEK); toxin-negative specimens were subjected to stool culture. This algorithm allowed final results for 92% of specimens with a turnaround time of 4 h. PMID:18032627
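
    The two-step reporting logic reads naturally as a small decision function. A sketch of the flow described in the abstract (result strings and parameter names are ours, not the paper's):

```python
def cdiff_report(gdh_positive, toxin_positive=False, culture_positive=False):
    """Two-step C. difficile algorithm: GDH antigen screen first,
    rapid toxin A/B test on screen-positives, and stool culture only
    for the screen-positive / toxin-negative remainder."""
    if not gdh_positive:
        return "negative"                  # screen-negative: report and stop
    if toxin_positive:
        return "positive (toxin detected)" # screen+ and toxin+
    # screen-positive but toxin-negative: reflex to culture
    return "positive (culture)" if culture_positive else "negative"
```

    Because most specimens stop at the first step, the bulk of results can be issued within the 4 h turnaround reported above.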

  12. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhage. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.

  13. A simple algorithm for detecting circular permutations in proteins.

    Science.gov (United States)

    Uliel, S; Fliess, A; Amir, A; Unger, R

    1999-11-01

    Circular permutation of a protein is a genetic operation in which part of the C-terminus of the protein is moved to its N-terminus. Recently, it has been shown that proteins that undergo engineered circular permutations generally maintain their three-dimensional structure and biological function. This observation raises the possibility that circular permutation has occurred in Nature during evolution. In this scenario, a protein underwent circular permutation into another protein, and thereafter both proteins diverged further by standard genetic operations. To study this possibility, one needs an efficient algorithm that, for a given pair of proteins, can detect an underlying circular permutation event. A possible formal description of the question is: given two sequences, find a circular permutation of one of them under which the edit distance between the proteins is minimal. A naive algorithm might take time proportional to N^3 or even N^4, which is prohibitively slow for a large-scale survey. A sophisticated algorithm that runs in asymptotic time O(N^2) was recently suggested, but it is not practical for a large-scale survey. A simple and efficient algorithm that runs in time O(N^2) is presented here. The algorithm is based on duplicating one of the two sequences and then performing a modified version of the standard dynamic programming algorithm. While the algorithm is not guaranteed to find the optimal result, we present data indicating that in practice the algorithm performs very well. A Fortran program that calculates the optimal edit distance under circular permutation is available upon request from the authors. ron@biocom1.ls.biu.ac.il.
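
    The objective being minimized can be stated directly. The brute-force sketch below evaluates every rotation with a standard edit-distance DP — the naive O(N^3) baseline the paper improves on; the paper's own trick of duplicating one sequence and running a single modified DP pass is not reproduced here.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via row-by-row dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def min_circular_edit_distance(a, b):
    """Smallest edit distance between `a` and any circular permutation
    of `b` -- the quantity the paper's O(N^2) algorithm approximates."""
    return min(edit_distance(a, b[k:] + b[:k]) for k in range(len(b)))
```

    For sequences related by a pure circular permutation the minimum is zero, since some rotation of one sequence matches the other exactly.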

  14. Multi-objective community detection based on memetic algorithm.

    Directory of Open Access Journals (Sweden)

    Peng Wu

    Full Text Available Community detection has drawn a lot of attention, as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, called the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels.

  15. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    Energy Technology Data Exchange (ETDEWEB)

    Cordero, Jose; Garzon Reyes, Johnson, E-mail: josecorderog@hotmail.com [Grupo de Optica y Espectroscopia GOE, Centro de Ciencia Basica, Universidad Pontifica Bolivariana de Medellin (Colombia)

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful medical treatment. Mammography is an imaging modality effective for early diagnosis of abnormalities, in which a medical image of the mammary gland is obtained with low-dose X-rays. It allows a tumor or circumscribed mass to be detected two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in breast cancer mortality. In this paper, three hybrid algorithms for circumscribed mass detection in digitalized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Next, shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, where the characteristics vector for the classifier was constructed from a few measurements. Later, the implemented algorithms were evaluated by ROC curves, using 40 test images: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in the correct detection of a lesion are discussed.

  16. aTrunk—An ALS-Based Trunk Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Sebastian Lamprecht

    2015-08-01

    Full Text Available This paper presents a rapid multi-return ALS-based (Airborne Laser Scanning) tree trunk detection approach. The multi-core Divide & Conquer algorithm uses CBH (Crown Base Height) estimation and a 3D-clustering approach to isolate points associated with single trunks. For each trunk, a principal-component-based linear model is fitted, while a deterministic modification of LO-RANSAC is used to identify an optimal model. The algorithm returns a vector-based model for each identified trunk, with parameters such as the ground position, zenith orientation, azimuth orientation and length of the trunk. The algorithm performed well for a study area of 109 trees (about 2/3 Norway Spruce and 1/3 European Beech) with a point density of 7.6 points per m², reaching a detection rate of about 75% and an overall accuracy of 84%. Compared to crown-based tree detection methods, the aTrunk approach has the advantages of high reliability (5% commission error) and high tree positioning accuracy (0.59 m average difference and 0.78 m RMSE). The use of overlapping segments with parametrizable size allows seamless detection of the tree trunks.

  17. Automatic Arrhythmia Beat Detection: Algorithm, System, and Implementation

    Directory of Open Access Journals (Sweden)

    Wisnu Jatmiko

    2016-08-01

    Full Text Available Cardiac disease is one of the major causes of death in the world. Early diagnosis depends on detecting abnormality in the heart beat pattern, known as arrhythmia. A novel fuzzy neuro generalized learning vector quantization (FN-GLVQ) for automatic arrhythmia heart beat classification is proposed. The algorithm is an extension of the GLVQ algorithm that employs a fuzzy logic concept as the discriminant function in order to develop a robust algorithm and improve classification performance. The algorithm is tested against the MIT-BIH arrhythmia database to measure its performance. Based on the experimental results, FN-GLVQ is able to increase the accuracy of GLVQ by a soft margin. As we intend to build a device with automated arrhythmia detection, FN-GLVQ is then implemented on a Field Programmable Gate Array to prototype the system as a real device.

  18. Static and Dynamic Pedestrian Detection Algorithm for Visual Based Driver Assistive System

    OpenAIRE

    Bush Idoko John; Dimililer Kamil

    2017-01-01

    This paper presents a new pedestrian detection algorithm used in an Advanced Driver-Assistance System with only one camera, aiming to improve traffic safety. The new algorithm differs from traditional pedestrian detection algorithms, which focus only on pedestrian detection rate or detection accuracy; the proposed algorithm focuses on both the accuracy and the rate. Some new features are proposed to improve the pedestrian detection rate of the system. Al...

  19. Clinical results of an advanced SVT detection enhancement algorithm.

    Science.gov (United States)

    Lee, Michael A; Corbisiero, Raffaele; Nabert, David R; Coman, James A; Giudici, Michael C; Tomassoni, Gery F; Turk, Kyong T; Breiter, David J; Zhang, Yunlong

    2005-10-01

    Supraventricular tachycardia (SVT) has many characteristics that are similar to ventricular tachycardia (VT). This presents a significant challenge for the SVT-detection algorithms of an implantable cardioverter defibrillator (ICD). A newly developed ICD, which utilizes a Vector Timing and Correlation algorithm as well as interval-based conventional SVT discrimination algorithms (Rhythm ID), was evaluated in this study. This study was a prospective, multicenter trial that evaluated 96 patients implanted with an ICD at 21 U.S. centers. All patients were followed at 2 weeks, 1 month, and every 3 months post implant. A manual Rhythm ID reference vector was acquired prior to any arrhythmia induction. During testing, atrial tachyarrhythmias were induced first, followed by ventricular arrhythmia induction. Induced and spontaneous SVT and VT/ventricular fibrillation (VF) episodes recorded during the trial were annotated by physician investigators. The mean age of the patients implanted with an ICD was 67.3 +/- 10.8 years. Eighty-one percent of patients were male. The primary cardiovascular disease was coronary artery disease, and the primary tachyarrhythmia was monomorphic VT. Implementation of the Rhythm ID algorithm did not affect the VT/VF detection time. There were a total of 370 ventricular tachyarrhythmias (277 induced and 93 spontaneous) and 441 SVT episodes (168 induced and 273 spontaneous). Sensitivity for ventricular tachyarrhythmias was 100%, and specificity for SVT was 92% (94% and 91% for induced and spontaneous SVT, respectively). All patients had a successful manual Rhythm ID acquisition prior to atrial tachyarrhythmia induction. At the 1-month follow-up, the Rhythm ID references were updated automatically an average of 167.8 +/- 122.7 times. Stored Rhythm ID references correlated to patients' normally conducted rhythm 100% at 2 weeks, and 98% at 1 month. The Rhythm ID algorithm achieved 100% sensitivity for VT/VF, and 92% specificity for SVT. The manual

  20. Algorithms for Moving Object Detection: YSTAR-NEOPAT Survey Program

    Directory of Open Access Journals (Sweden)

    Young-Ho Bae

    2005-12-01

    Full Text Available We developed and compared two automatic algorithms for moving object detection in the YSTAR-NEOPAT sky survey program. One method, called the starlist comparison method, identifies moving object candidates by comparing the photometry data tables from successive images. The other, called the image subtraction method, identifies candidates by subtracting one image from another, which isolates sources moving against the background stars. The efficiency and accuracy of these algorithms have been tested using actual survey data from the YSTAR-NEOPAT telescope system. For the detected candidates, we performed eyeball inspection of animated images to confirm the validity of asteroid detections. The main conclusions are as follows. First, the optical distortion in the YSTAR-NEOPAT wide-field images can be properly corrected by comparison with the USNO-B1.0 catalog, and an astrometric accuracy of around 1.5 arcsec can be preserved. Second, image subtraction provides more robust and accurate detection of moving objects. For two different thresholds of 2.0 and 4.0σ, the image subtraction method uncovered 34 and 12 candidates, respectively, and most of them were confirmed to be real. The starlist comparison method detected many more candidates, 60 and 6 for the respective threshold levels, but nearly half of them turned out to be false detections.
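
    The image-subtraction idea is the simpler of the two methods to sketch: difference two registered frames and keep the pixels whose residual exceeds a threshold. A toy version on plain nested lists (real survey frames would first need registration and PSF matching, which are omitted here):

```python
def subtract_and_detect(img1, img2, threshold):
    """Difference two aligned frames (lists of rows of pixel values)
    and return (x, y) coordinates where |difference| exceeds the
    threshold -- the residual left by a source that moved between
    the two exposures. Static stars cancel out."""
    hits = []
    for y, (row1, row2) in enumerate(zip(img1, img2)):
        for x, (p1, p2) in enumerate(zip(row1, row2)):
            if abs(p1 - p2) > threshold:
                hits.append((x, y))
    return hits

# Toy frames: a static star at (1, 1) and an object moving (0,0) -> (2,2)
frame1 = [[50, 10, 10, 10],
          [10, 100, 10, 10],
          [10, 10, 10, 10],
          [10, 10, 10, 10]]
frame2 = [[10, 10, 10, 10],
          [10, 100, 10, 10],
          [10, 10, 50, 10],
          [10, 10, 10, 10]]
```

    The moving object leaves two residuals (its old and new positions), while the static star at (1, 1) cancels exactly, which is why subtraction yields fewer false detections than comparing starlists.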

  1. Stochastic Resonance algorithms to enhance damage detection in bearing faults

    Directory of Open Access Journals (Sweden)

    Castiglione Roberto

    2015-01-01

    Full Text Available Stochastic Resonance is a phenomenon, studied and mainly exploited in telecommunications, which permits the amplification and detection of weak signals with the assistance of noise. The first papers on this technique date to the early 1980s and were developed to explain the periodically recurrent ice ages. Other applications mainly concern neuroscience, biology, medicine and, of course, signal analysis and processing. Recently, some researchers have applied the technique to detecting faults in mechanical systems and bearings. In this paper, we try to better understand the conditions of applicability and which algorithm is best adopted for these purposes. In fact, to make the methodology profitable and efficient at enhancing the signal spikes due to faults in the rings and balls/rollers of bearings, some parameters have to be properly selected. This is a problem, since in system identification this procedure should be as blind as possible. Two algorithms are analysed: the first exploits classical SR with three mutually dependent parameters, while the other uses the Woods-Saxon potential, also with three parameters but with different meanings. The comparison of the performance of the two algorithms and the optimal choice of their parameters are the scope of this paper. The algorithms are tested on simulated and experimental data, showing an evident capacity to increase the signal-to-noise ratio.

  2. Circle Detection Using an Electromagnetism-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    Cuevas E.

    2011-10-01

    Full Text Available Physics-inspired computation is becoming popular and has been acknowledged by the scientific community. This emerging area has developed a wide range of techniques and methods for dealing with complex problems. On the other hand, automatic circle detection in digital images has been considered an important and complex task for the computer vision community, which has devoted a tremendous amount of research to seeking an optimal circle detector. This article presents an algorithm for the automatic detection of circular shapes embedded in complicated and noisy images without using the conventional Hough transform techniques. The approach is based on a nature-inspired technique called Electromagnetism-Like Optimization (EMO), a heuristic method following electromagnetism principles for solving complex optimization problems. In the EMO algorithm, solutions are built considering the electromagnetic attraction and repulsion among charged particles, with each particle's charge representing the fitness of its solution. The algorithm encodes each candidate circle as three non-collinear points over an edge-only image. Guided by the values of the objective function, the set of encoded candidate circles (charged particles) is evolved using the EMO algorithm so that they fit the actual circles on the edge map of the image. Experimental results from several tests on synthetic and natural images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding accuracy, speed, and robustness.
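
    Decoding an EMO particle (three non-collinear edge points) into a candidate circle is plain geometry: the circumcenter of the three points. A sketch of that decoding step only; the EMO search loop and the edge-map fitness function are not shown.

```python
def circle_from_points(p1, p2, p3):
    """Center and radius of the unique circle through three
    non-collinear points (circumcenter formula)."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy), r
```

    A fitness evaluation in the spirit of the paper would then count how many edge pixels lie within a small tolerance of the decoded circle's circumference.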

  3. SEU-tolerant IQ detection algorithm for LLRF accelerator system

    Science.gov (United States)

    Grecki, M.

    2007-08-01

    High-energy accelerators use an RF field to accelerate charged particles. Measurement of the effective field parameters (amplitude and phase) is a task of great importance in these facilities. The RF signal is downconverted in frequency, preserving the amplitude and phase information, and then sampled by an ADC. One of the several tasks of the LLRF control system is to estimate the amplitude and phase (or I and Q components) of the RF signal. These parameters are further used in the control algorithm. The XFEL accelerator will be built using a single-tunnel concept. Electronic devices (including the LLRF control system) will therefore be exposed to ionizing radiation, particularly to a neutron flux generating SEUs in digital circuits. The algorithms implemented in FPGAs/DSPs should therefore be SEU-tolerant. This paper presents the application of the WCC method to make the IQ detection algorithm immune to SEUs. The VHDL implementation of this algorithm in a Xilinx Virtex II Pro FPGA is presented, together with simulation results demonstrating the algorithm's suitability for systems operating in the presence of SEUs.
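
    A common way to implement IQ detection is to sample the downconverted IF at exactly four samples per period, so that I and Q fall out of simple sample differences. A sketch of that arithmetic (the WCC redundancy/voting scheme the paper applies on top of it is not reproduced):

```python
from math import cos, pi, atan2, hypot

def iq_detect(samples):
    """With x[n] = A*cos(n*pi/2 + phi), i.e. the IF sampled at 4x its
    frequency:  I = (x[0] - x[2]) / 2 = A*cos(phi)
                Q = (x[3] - x[1]) / 2 = A*sin(phi)
    Averaging I and Q over whole periods suppresses noise."""
    n_periods = len(samples) // 4
    i_acc = q_acc = 0.0
    for k in range(n_periods):
        x0, x1, x2, x3 = samples[4 * k: 4 * k + 4]
        i_acc += (x0 - x2) / 2
        q_acc += (x3 - x1) / 2
    i, q = i_acc / n_periods, q_acc / n_periods
    return hypot(i, q), atan2(q, i)   # amplitude, phase
```

    In an SEU-hardened design, several such estimates would be computed redundantly and combined by a voting scheme so that a single corrupted register cannot flip the result.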

  4. Comparison of machine learning algorithms for detecting coral reef

    Directory of Open Access Journals (Sweden)

    Eduardo Tusa

    2014-09-01

    Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures the AUV's stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using Gabor Wavelet filters, and feature classification using machine learning based on Neural Networks. Due to the long running time of the Neural Networks, we substitute a classification algorithm based on Decision Trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor Wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the Decision Trees algorithm. Our coral detector runs in 70 ms, compared to the 22 s taken by the algorithm of Purser et al. (2009).

  5. Comparative Evaluation of Community Detection Algorithms: A Topological Approach

    OpenAIRE

    Orman, Günce,; Labatut, Vincent; Cherifi, Hocine

    2012-01-01

    International audience; Community detection is one of the most active fields in complex network analysis, due to its potential value in practical applications. Many works inspired by different paradigms are devoted to the development of algorithmic solutions that reveal the network structure as cohesive subgroups. Comparative studies reported in the literature usually rely on a performance measure that considers the community structure as a partition (Rand Index, Normalized Mutual i...

  6. Polio vaccines, SV40 and human tumours, an update on false positive and false negative results.

    Science.gov (United States)

    Elmishad, A G; Bocchetta, M; Pass, H I; Carbone, M

    2006-01-01

    Simian virus 40 (SV40) has been detected in different human tumours in numerous laboratories. The detection of SV40 in human tumours has been linked to the administration of SV40-contaminated polio vaccines from 1954 until 1963. Many of these reports linked SV40 to human mesothelioma. Some studies have failed to detect SV40 in human tumours, and this has caused a controversy. Here we review the current literature. Moreover, we present evidence showing how differences in the sensitivities of methodologies can lead to very different interpretations of the same study. The same 20 mesothelioma specimens all tested negative, 2/20 tested positive, or 7/20 tested positive for SV40 Tag simply by changing the detection method on the same immunoprecipitation/western blot membranes. These results provide a simple explanation for some of the apparently discordant results reported in the literature.

  7. A Study of Lane Detection Algorithm for Personal Vehicle

    Science.gov (United States)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the term “personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair and motor-powered bicycle are examples of personal vehicles and have been developed as useful personal transportation. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized using on-board intelligent multiple sensors. Demand for such personal vehicles is increasing: 1) to enhance human mobility, 2) to support mobility for elderly persons, and 3) to reduce environmental burdens. As the personal vehicle market grows rapidly, the number of accidents caused by human error is also increasing. Many of these accidents stem from limited driving ability. To enhance or support driving ability, as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed method employs a 360-degree omnidirectional camera and a robust image processing algorithm. To detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed on an actual developed vehicle under various outdoor sunlight conditions.
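
    The Hough-transform stage of such a pipeline can be sketched compactly: every edge pixel votes for all (rho, theta) lines passing through it, and lane candidates appear as accumulator peaks. A toy version on a binary edge grid; the omnidirectional-camera unwarping and template matching from the paper are not reproduced.

```python
from math import cos, sin, pi

def hough_lines(edges, n_theta=180, rho_step=1.0):
    """Minimal Hough accumulator for straight lane markings in a binary
    edge image (list of rows of 0/1). Each edge pixel (x, y) votes for
    rho = x*cos(theta) + y*sin(theta) over all theta in [0, pi).
    Returns (rho, theta, votes) for the strongest line."""
    h, w = len(edges), len(edges[0])
    acc = {}
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for t in range(n_theta):
                theta = t * pi / n_theta
                rho = round((x * cos(theta) + y * sin(theta)) / rho_step)
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho * rho_step, t * pi / n_theta, votes
```

    On a 10x10 grid with a vertical edge line at x = 3, the winning cell collects all 10 votes at rho = 3 with theta near zero.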

  8. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    Science.gov (United States)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete-parameter problems.

  9. Oltar sv. Wolfganga u Vukovoju

    OpenAIRE

    Repanić Braun, Mirjana; Škarić, Ksenija; Wolff Zubović, Martina; Cavalli Ladašić, Helena

    2013-01-01

    From 2004 to 2010, the altar of St. Wolfgang from the chapel in Vukovoj was restored at the Croatian Conservation Institute. It is a unique example of a richly ornamented mid-17th-century Mannerist altar in continental Croatia, preserved at its original location in an only slightly altered state. The altar is traditionally dated to 1650, owing to the inscription below the painting on the second tier. Conservation-restoration research has shown that this inscription does not belong to the 17th-century polychromy, but rather...

  10. Oscillation Detection Algorithm Development Summary Report and Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    -based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective in certain situations, but less effective in others. For example, traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even for an algorithm that works for both disturbance and ambient data, such as R3LS, the latency resulting from the time window used in the algorithm is an issue for timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation, while for disturbance data the time window can be significantly shorter, so the latency in estimation can be much less. In addition, adding a known input signal, such as a noise probing signal, can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System conditions change over time: disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect the occurrence of oscillations so that the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-to-noise ratio of measurements.
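
    As a toy illustration of a signal-to-noise-based oscillation trigger, the sketch below measures what fraction of a window's power sits in the DFT bin nearest a candidate mode frequency; a supervisor could switch algorithms (e.g. from Yule-Walker on ambient data to Prony on ringdown data) when this fraction jumps. The function and any threshold choice are ours, not the project's.

```python
from math import cos, sin, pi

def bin_power(signal, k):
    """Power of DFT bin k, evaluated directly for a single bin."""
    n = len(signal)
    re = sum(v * cos(2 * pi * k * i / n) for i, v in enumerate(signal))
    im = sum(v * sin(2 * pi * k * i / n) for i, v in enumerate(signal))
    return (re * re + im * im) / n

def tone_fraction(signal, freq, fs):
    """Fraction of (non-DC, positive-frequency) power in the DFT bin
    nearest `freq`; close to 1.0 for a clean oscillation at that
    frequency, small for broadband or trend-dominated measurements."""
    n = len(signal)
    k = round(freq * n / fs)
    p_tone = bin_power(signal, k)
    p_total = sum(bin_power(signal, j) for j in range(1, n // 2))
    return p_tone / p_total
```

    A pure tone at the candidate frequency concentrates nearly all its power in one bin, while a ramp-like trend spreads its power across the low bins, so the two cases separate cleanly.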

  11. Extended seizure detection algorithm for intracranial EEG recordings

    DEFF Research Database (Denmark)

    Kjaer, T. W.; Remvig, L. S.; Henriksen, J.

    2010-01-01

    Objective: We implemented and tested an existing seizure detection algorithm for scalp EEG (sEEG) with the purpose of adapting it to intracranial EEG (iEEG) recordings. Method: iEEG was obtained from 16 patients with focal epilepsy undergoing work-up for resective epilepsy surgery. Each patient...... and non-ictal iEEG. We compare our results to a method published by Shoeb in 2004. While the original method on sEEG was optimal with the use of only four subbands in the wavelet analysis, we found that better seizure detection could be achieved if all subbands were used for iEEG. Results: When using...... the original implementation, a sensitivity of 92.8% and a false positive ratio (FPR) of 0.93/h were obtained. Our extension of the algorithm rendered a 95.9% sensitivity and only 0.65 false detections per hour. Conclusion: Better seizure detection can be performed when the higher frequencies in the iEEG were...

  12. Linear segmentation algorithm for detecting layer boundary with lidar.

    Science.gov (United States)

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have demonstrated limitations in defining the base and top and in window-size setting, and have neglected the in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with reasonable threshold setting, boundary selection, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be applied directly to uncalibrated data without requiring any additional measurements or window size selections.
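
    One building block of linear segmentation is choosing the breakpoint that best splits a profile into two straight pieces. A minimal sketch of that step; the paper's threshold setting, boundary selection, and false-positive removal strategies are not reproduced.

```python
def _fit_sse(xs, ys):
    """Least-squares line fit; returns the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def best_breakpoint(profile):
    """Split a lidar profile into two line segments at the index that
    minimizes the combined fitting error. Applied recursively to the
    pieces, such breakpoints become candidate layer boundaries."""
    xs = list(range(len(profile)))
    best_k, best_err = None, float("inf")
    for k in range(2, len(profile) - 2):
        err = _fit_sse(xs[:k], profile[:k]) + _fit_sse(xs[k:], profile[k:])
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```

    On a profile that is flat and then ramps upward, the chosen breakpoint lands on the first ramp sample, which would be interpreted as a layer base in an up-looking geometry.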

  13. Software Piracy Detection Model Using Ant Colony Optimization Algorithm

    Science.gov (United States)

    Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin

    2017-06-01

    The Internet enables information to be accessed anytime and anywhere, creating an environment in which information can easily be copied. Easy access to the Internet is one of the factors contributing to piracy in Malaysia, as in the rest of the world. The 2013 BSA Global Software Survey (Compliance Gap) found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of unlicensed installations worldwide was reported to be 62.7 billion dollars. Piracy can happen anywhere, including in universities. Malaysia, like other countries, faces piracy committed by university students. Piracy in universities concerns acts of stealing intellectual property, in the form of software piracy, music piracy, movie piracy, and piracy of intellectual materials such as books, articles and journals. This situation affects the owners of intellectual property, whose property is placed in jeopardy. This study developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm, the Ant Colony Optimization algorithm. The training data were collected through a study conducted at Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy is better than that of the J48 algorithm.

  14. Actual Pathogen Detection: Sensors and Algorithms - a Review

    Directory of Open Access Journals (Sweden)

    Federico Hahn

    2009-03-01

    Full Text Available Pathogens feed on fruits and vegetables, causing great food losses or at least reducing their shelf life. These pathogens can cause losses of the final product or on the farms where the products are grown, attacking leaves, stems and trees. This review analyses disease detection sensors and algorithms for both the farm and postharvest management of fruit and vegetable quality. Mango, avocado, apple, tomato, potato, citrus and grapes were selected as the fruits and vegetables for study due to their world-wide consumption. Disease warning systems for predicting pathogens and insects on farms during fruit and vegetable production are commonly used for all these crops and are available where meteorological stations are present. These disease risk systems are slowly being replaced by remote sensing monitoring in developed countries. Satellite imagery has improved in temporal resolution, but remains expensive and must become cheaper for world-wide use. In the last 30 years, much research has been carried out on non-destructive sensors for food quality. At present, non-destructive technology is applied to sorting the high-quality fruit desired by the consumer. The sensors require algorithms to work properly, the most widely used being discriminant analysis and neural network training. New algorithms will be required to handle the large quantity of data acquired and its processing, and for disease warning strategies for disease detection.

  15. Incremental refinement of a multi-user-detection algorithm (II)

    Directory of Open Access Journals (Sweden)

    M. Vollmer

    2003-01-01

    Full Text Available Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
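
    At its core, joint multi-user detection solves a structured linear system coupling all users' symbols. A minimal zero-forcing sketch of that underlying problem is below; the paper factors the system with a generalized Schur algorithm on its displacement representation, whereas numpy's generic solver stands in for it here, and the signature matrix and noise level are invented.

```python
import numpy as np

# The system matrix A couples K users' BPSK symbols into N received chips.
rng = np.random.default_rng(5)
K, N = 4, 32
A = rng.normal(size=(N, K))               # assumed known signatures/channel
s = rng.choice([-1.0, 1.0], size=K)       # transmitted BPSK symbols
r = A @ s + 0.1 * rng.normal(size=N)      # received vector with noise

# Zero-forcing joint detection: solve the normal equations A^T A x = A^T r,
# then slice the soft estimates back to the BPSK alphabet.
x = np.linalg.solve(A.T @ A, A.T @ r)
detected = np.sign(x)
```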

  16. A Duffing oscillator algorithm to detect the weak chromatographic signal.

    Science.gov (United States)

    Zhang, Wei; Xiang, Bing-Ren

    2007-02-28

    Based on the Duffing equation, a Duffing oscillator algorithm (DOA) to improve the signal-to-noise ratio (SNR) is presented. Using simulated and experimental data sets, it is shown that the SNR of a weak signal can be greatly enhanced by this method. With signal enhancement by the DOA, the method raised the SNR for a low concentration of methylbenzene from 2.662 to 29.90, and it can be used for quantitative analysis of methylbenzene at concentrations below the detection limit of the analytical system. The DOA might be a promising tool to extend the instrumental linear range and to improve the accuracy of trace analysis. This research enlarges the application scope of the Duffing equation to chromatographic signal processing.
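
    The detection principle can be sketched as follows: a damped, periodically driven Duffing oscillator is integrated with and without a weak additive input, and near the oscillator's transition threshold even a small extra forcing can tip the orbit into a qualitatively different regime. The parameters and integration scheme below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def duffing_rk4(drive_amp, weak_amp=0.0, k=0.5, steps=20000, dt=0.01):
    """Integrate the Holmes-type Duffing equation
        x'' + k x' - x + x^3 = (drive_amp + weak_amp) * cos(t)
    with a fixed-step RK4 scheme and return the trajectory x(t)."""
    def f(t, y):
        x, v = y
        force = (drive_amp + weak_amp) * np.cos(t)
        return np.array([v, -k * v + x - x ** 3 + force])

    y = np.array([0.1, 0.0])
    xs = np.empty(steps)
    for n in range(steps):
        t = n * dt
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[n] = y[0]
    return xs

# Reference orbit versus orbit perturbed by a weak coherent input.
x_ref = duffing_rk4(drive_amp=0.78)
x_sig = duffing_rk4(drive_amp=0.78, weak_amp=0.05)
```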

  17. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm.

    Science.gov (United States)

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-10-28

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space.

  18. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Dong-Han Lee

    2017-10-01

    Full Text Available This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space.
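
    The EEMD step itself is involved, but assuming the component signals (IMFs, or here raw vibration segments) are in hand, the feature-extraction and PCA stages can be sketched as below. The feature set, fault simulation, and all parameters are invented for illustration.

```python
import numpy as np

def stat_features(sig):
    """Statistical features of one component signal: RMS, kurtosis,
    crest factor, and peak-to-peak (a typical, assumed feature set)."""
    rms = np.sqrt(np.mean(sig ** 2))
    kurt = np.mean((sig - sig.mean()) ** 4) / (sig.var() ** 2 + 1e-12)
    crest = np.max(np.abs(sig)) / (rms + 1e-12)
    return np.array([rms, kurt, crest, np.ptp(sig)])

def pca_2d(F):
    """Project a (samples x features) matrix onto its two leading
    principal components via the SVD of the centred data."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:2].T

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
           for _ in range(10)]
# Faulty bearings add periodic impulses on top of the base vibration.
faulty = []
for _ in range(10):
    s = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
    s[::128] += 4.0
    faulty.append(s)
F = np.array([stat_features(s) for s in healthy + faulty])
proj = pca_2d(F)
```

The impulses inflate kurtosis and crest factor, so the two classes separate cleanly in the projected plane.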

  19. Static and Dynamic Pedestrian Detection Algorithm for Visual Based Driver Assistive System

    Directory of Open Access Journals (Sweden)

    Bush Idoko John

    2017-01-01

    Full Text Available This paper presents a new pedestrian detection algorithm for an Advanced Driver-Assistance System using only one camera, aiming to improve traffic safety. The new algorithm differs from traditional pedestrian detection algorithms, which focus on either the pedestrian detection rate or the pedestrian detection accuracy alone; the proposed algorithm addresses both. Some new features are proposed to improve the pedestrian detection rate of the system, and colour difference is used to decrease the false detection rate. The experimental results show that the pedestrian detection rate can reach around 90% while the false detection rate is 3%.

  20. AN EFFICIENT PEAK VALLEY DETECTION BASED VAD ALGORITHM FOR ROBUST DETECTION OF SPEECH AUDITORY BRAINSTEM RESPONSES

    OpenAIRE

    Ranganadh Narayanam

    2013-01-01

    Voice Activity Detection (VAD) problem considers detecting the presence of speech in a noisy signal. The speech/non-speech classification task is not as trivial as it appears, and most of the VAD algorithms fail when the level of background noise increases. In this research we are presenting a new technique for Voice Activity Detection (VAD) in EEG collected brain stem speech evoked potentials data [7, 8, 9]. This one is spectral subtraction method in which we have developed ou...

  1. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.

    2016-01-01

    Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.

  2. Comparing Several Algorithms for Change Detection of Wetland

    Science.gov (United States)

    Yan, F.; Zhang, S.; Chang, L.

    2015-12-01

    As "the kidneys of the landscape" and "ecological supermarkets", wetland plays an important role in ecological equilibrium and environmental protection.Therefore, it is of great significance to understand the dynamic changes of the wetland. Nowadays, many index and many methods have been used in dynamic Monitoring of Wetland. However, there are no single method and no single index are adapted to detect dynamic change of wetland all over the world. In this paper, three digital change detection algorithms are applied to 2005 and 2010 Landsat Thematic Mapper (TM) images of a portion of the Northeast China to detect wetland dynamic between the two dates. The change vector analysis method (CVA) uses 6 bands of TM images to detect wetland dynamic. The tassled cap transformation is used to create three change images (change in brightness, greenness, and wetness). A new method--- Comprehensive Change Detection Method (CCDM) is introduced to detect forest dynamic change. The CCDM integrates spectral-based change detection algorithms including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extracts change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (differenced Normalized Burn Ratio (dNBR), differenced Normalized Difference Vegetation Index (dNDVI), the Change Vector (CV) and a new index called the Relative Change Vector Maximum (RCVMAX)) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. Related test proved that CCDM method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and anthropogenic disturbances potentially associated with land cover changes on

  3. EEG seizure detection and prediction algorithms: a survey

    Science.gov (United States)

    Alotaiby, Turkey N.; Alshebeili, Saleh A.; Alshawi, Tariq; Ahmad, Ishtiaq; Abd El-Samie, Fathi E.

    2014-12-01

    Epilepsy patients experience challenges in daily life due to the precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patient or others, especially when heavy machinery is involved, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Automatic detection of such activity is therefore of great importance. Another potential use of EEG signal analysis is the prediction of epileptic activity before it occurs, which would enable patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights into the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between them. Finally, we conclude with future research directions and open problems in this topic.

  4. Oil metal particles Detection Algorithm Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Shang Wei

    2017-01-01

    Full Text Available In order to observe the real-time abrasion status of an aero-engine, the lubrication system must be monitored online. As the aero-engine operating time and running state change, the concentration, composition, size and other parameters of the metal debris change as well, and can serve as important indicators of aero-engine fault states. However, due to electromagnetic and vibration disturbances and random noise introduced by the processing unit itself, the metal-particle signal tends to be contaminated by noise. The proposed oil metal particle detection algorithm based on the wavelet transform exploits the localization of the wavelet transform in both the time and frequency domains and the characteristics of multi-resolution analysis, combined with the signal characteristics under actual aero-engine conditions, to realize noise reduction and detection; the algorithm is validated using real experimental data. The results show that noise can be effectively reduced and signal characteristics can be detected correctly.
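
    A hand-rolled Haar-wavelet soft-thresholding sketch illustrates the general multi-resolution denoising idea. The abstract does not specify the wavelet, decomposition depth, or threshold rule, so everything below (Haar basis, 3 levels, universal threshold) is an assumption.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=3):
    """Multi-level Haar decomposition with soft-thresholded details
    (universal threshold; noise level estimated from the finest scale)."""
    a, details = x, []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(x.size))
    details = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
               for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 3 * t)            # slowly varying target signal
noisy = clean + rng.normal(0, 0.4, t.size)
denoised = denoise(noisy)
```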

  5. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

    Full Text Available We propose a new algorithm for automatically detecting violations of traffic rules, to improve people's safety at unregulated pedestrian crossings. The algorithm proceeds in multiple steps: zebra-crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.

  6. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS's main innovation is its balance error detection algorithm, which has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High-definition videos of the BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0

  7. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Science.gov (United States)

    Sierra-Hernandez, Juan Manuel; Avila-Garcia, Maria Susana; Rojas-Laguna, Roberto

    2017-01-01

    This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image by using the Hadamard product as fitness function. This proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in terms of execution time about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications. PMID:28321264

  8. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Jose de Jesus Guerrero-Turrubiates

    2017-01-01

    Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image by using the Hadamard product as fitness function. This proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in terms of execution time about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
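
    The candidate-generation and fitness steps described above can be sketched directly: three boundary pixels determine a unique parabola, and the Hadamard-style fitness counts how many rasterised parabola pixels land on edge pixels. The helper names below are hypothetical, and the rasterised overlap is a simplified stand-in for the paper's fitness function.

```python
import numpy as np

def parabola_from_points(p1, p2, p3):
    """Coefficients (a, b, c) of y = a*x^2 + b*x + c through three
    boundary pixels -- the candidate-generation step of the EDA search."""
    xs, ys = zip(p1, p2, p3)
    A = np.vander(xs, 3)            # rows [x^2, x, 1]
    return np.linalg.solve(A, ys)

def fitness(coeffs, edge_img):
    """Hadamard-style overlap: rasterise the candidate parabola and count
    how many of its pixels coincide with edge pixels."""
    h, w = edge_img.shape
    a, b, c = coeffs
    hits = 0
    for x in range(w):
        y = int(round(a * x * x + b * x + c))
        if 0 <= y < h and edge_img[y, x]:
            hits += 1
    return hits

# Exact recovery from three samples of y = 0.5*x^2 - 2*x + 3.
coeffs = parabola_from_points((0, 3), (2, 1), (4, 3))
```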

  9. NASA airborne radar wind shear detection algorithm and the detection of wet microbursts in the vicinity of Orlando, Florida

    Science.gov (United States)

    Britt, Charles L.; Bracalente, Emedio M.

    1992-01-01

    The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of windshear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.

  10. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    Science.gov (United States)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers make them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal

  11. Enhancing Time-Series Detection Algorithms for Automated Biosurveillance

    Science.gov (United States)

    Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A.

    2009-01-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14–28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data. PMID:19331728
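
    Two of the modifications above (the trailing baseline window and the SD floor of 1.0) can be sketched with a simple z-score detector. The 4-sigma alert cut, Poisson baseline, and injected spike are illustrative assumptions, not the BioSense configuration.

```python
import numpy as np

def alerts(counts, baseline=28, min_sd=1.0, z_cut=4.0):
    """Flag days whose count exceeds the trailing-baseline mean by more
    than z_cut standard deviations, applying an SD floor of min_sd."""
    flags = np.zeros(counts.size, dtype=bool)
    for day in range(baseline, counts.size):
        window = counts[day - baseline:day]
        mu = window.mean()
        sd = max(window.std(ddof=1), min_sd)
        flags[day] = (counts[day] - mu) / sd > z_cut
    return flags

rng = np.random.default_rng(7)
visits = rng.poisson(10, size=120).astype(float)   # daily syndromic counts
visits[90] += 40                                   # injected outbreak spike
flagged = alerts(visits)
```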

  12. Weighted compactness function based label propagation algorithm for community detection

    Science.gov (United States)

    Zhang, Weitong; Zhang, Rui; Shang, Ronghua; Jiao, Licheng

    2018-02-01

    Community detection in complex networks aims to detect community structure that is internally compact and externally sparse, according to the topological relationships among the nodes in the network. In this paper, we propose a compactness function that incorporates node weights and use it as the objective function for node label propagation. First, according to node degree, we find the sets of core nodes that have great influence on the network; the more connections there are between the core nodes and the other nodes, the more information these core nodes receive and transmit. Then, according to the similarity between nodes in the core-node sets and the node degrees, we assign weights to the nodes in the network, so that the labels of influential nodes take priority in the label propagation process, which effectively improves the accuracy of label propagation. The compactness function between nodes and communities is based on node influence: it combines the connections between nodes and communities with the degree to which a node belongs to its neighbouring communities, based on the calculated node weights, and thus makes effective use of the node and connection information in the network. The experimental results show that the proposed algorithm achieves good results on artificial networks and large-scale real networks compared with the 8 contrast algorithms.
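
    For contrast with the weighted variant described above, plain unweighted label propagation looks like this. The fixed sweep order and min-label tie-breaking are arbitrary choices made here for determinism; the paper's contribution is precisely to replace this uniform treatment with influence-based weights.

```python
from collections import Counter

def label_propagation(adj, sweeps=5):
    """Plain asynchronous label propagation: every node repeatedly adopts
    the most common label among its neighbours, breaking ties toward the
    smallest label.  A fixed node order keeps the toy run deterministic."""
    labels = {v: v for v in adj}
    order = sorted(adj, reverse=True)
    for _ in range(sweeps):
        changed = False
        for v in order:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            new = min(l for l, c in counts.items() if c == best)
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

# Two 5-cliques bridged by a single edge (4, 5).
adj = {v: set() for v in range(10)}
for group in (range(0, 5), range(5, 10)):
    for a in group:
        for b in group:
            if a != b:
                adj[a].add(b)
adj[4].add(5)
adj[5].add(4)
labels = label_propagation(adj)
```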

  13. Network intrusion detection by the coevolutionary immune algorithm of artificial immune systems with clonal selection

    Science.gov (United States)

    Salamatova, T.; Zhukov, V.

    2017-02-01

    The paper presents the application of the artificial immune systems apparatus as a heuristic method of network intrusion detection, for the algorithmic provision of intrusion detection systems. A coevolutionary immune algorithm with clonal selection was elaborated. Empirical evaluations of the algorithm's effectiveness were obtained by testing it on different datasets, and its efficiency was compared with that of analogous methods. The fundamental rules underlying the solutions generated by this algorithm are described in the article.
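
    To give a flavour of immune-inspired anomaly detection, here is a sketch of a different, simpler AIS mechanism, negative selection, rather than the paper's coevolutionary clonal-selection algorithm. The 2-D feature space, matching radius, and "self" set are all invented for illustration.

```python
import numpy as np

def train_detectors(self_points, n_candidates=500, radius=0.1, seed=0):
    """Negative selection: keep only random candidate detectors that do
    NOT match (fall within `radius` of) any normal/self sample."""
    rng = np.random.default_rng(seed)
    cands = rng.uniform(0.0, 1.0, size=(n_candidates, 2))
    d = np.linalg.norm(cands[:, None, :] - self_points[None, :, :], axis=2)
    return cands[d.min(axis=1) > radius]

def is_anomalous(x, detectors, radius=0.1):
    """Traffic is flagged when it matches any surviving detector."""
    return bool((np.linalg.norm(detectors - x, axis=1) <= radius).any())

# "Self" = normal traffic features, here a dense grid covering [0, 0.5]^2.
g = np.arange(0.0, 0.5001, 0.05)
self_points = np.array([(a, b) for a in g for b in g])
detectors = train_detectors(self_points)
```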

  14. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    Science.gov (United States)

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
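
    One of the simplest members of the algorithm family evaluated above is the velocity-threshold (I-VT) classifier, sketched below. The 100 deg/s threshold and the synthetic step stimulus are assumptions for illustration, not parameters from the study.

```python
import numpy as np

def ivt_classify(x, y, fs, vel_thresh=100.0):
    """Velocity-threshold (I-VT) event classification: samples whose
    angular velocity exceeds vel_thresh deg/s are labelled saccade,
    the rest fixation.  Returns a boolean saccade mask."""
    vx = np.gradient(x) * fs          # deg/s
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    return speed > vel_thresh

fs = 1000.0                            # 1000 Hz tracker
t = np.arange(1000)
# A 10-degree horizontal saccade: linear ramp over ~30 ms at sample 485.
x = 10.0 * np.clip((t - 485) / 30.0, 0.0, 1.0)
y = np.zeros_like(x)
saccade = ivt_classify(x, y, fs)
```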

  15. A simplified Suomi NPP VIIRS dust detection algorithm

    Science.gov (United States)

    Yang, Yikun; Sun, Lin; Zhu, Jinshan; Wei, Jing; Su, Qinghua; Sun, Wenxiao; Liu, Fangwei; Shu, Meiyan

    2017-11-01

    Due to the complex characteristics of dust and sparse ground-based monitoring stations, dust monitoring faces severe challenges, especially in dust storm-prone areas. Aiming to construct a high-precision dust storm detection model, a pixel database was built, consisting of dust over a variety of typical surface types such as cloud, vegetation, Gobi and ice/snow, and the distributions of their reflectances and Brightness Temperatures (BT) were analysed. On this basis, a new Simplified Dust Detection Algorithm (SDDA) for the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP VIIRS) is proposed. NPP VIIRS images covering northern China and Mongolia, regions that experience serious dust storms, were selected for the dust detection experiments. The monitoring results were compared with true colour composite images, and the results showed that most dust areas can be accurately detected, except for fragmented thin dust over bright surfaces. Dust ground-based measurements obtained from the Meteorological Information Comprehensive Analysis and Process System (MICAPS) and the Ozone Monitoring Instrument Aerosol Index (OMI AI) products were selected for comparison purposes. The results showed that the dust monitoring results agreed well in spatial distribution with the OMI AI dust products and the MICAPS ground-measured data, with an average accuracy of 83.10%. The SDDA is relatively robust and can realize automatic monitoring of dust storms.
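
    A toy split-window test conveys the flavour of threshold-based dust masking: mineral dust tends to drive the 11-12 um brightness-temperature difference negative, while very cold pixels are more likely cloud. The SDDA's actual bands and thresholds are not given here, so the values below are invented.

```python
import numpy as np

def dust_mask(bt11, bt12, btd_thresh=0.0, bt11_min=250.0):
    """Toy split-window dust test: flag pixels whose 11-12 um BT
    difference is below btd_thresh while bt11 stays warm enough to
    exclude high cold cloud.  Thresholds are illustrative only."""
    return (bt11 - bt12 < btd_thresh) & (bt11 > bt11_min)

# Four pixels: dust, clear land, cold cloud, clear land.
bt11 = np.array([[285.0, 282.0], [220.0, 290.0]])
bt12 = np.array([[287.5, 281.0], [223.0, 289.0]])
mask = dust_mask(bt11, bt12)
```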

  16. Bio Inspired Swarm Algorithm for Tumor Detection in Digital Mammogram

    Science.gov (United States)

    Dheeba, J.; Selvi, Tamil

    Microcalcification clusters in mammograms are a significant early sign of breast cancer. Individual clusters are difficult to detect, so an automatic computer-aided mechanism can help the radiologist detect microcalcification clusters easily and efficiently. This paper presents a new classification approach for the detection of microcalcifications in digital mammograms using a particle swarm optimization (PSO) based clustering technique. The fuzzy C-means clustering technique, well established for clustering data sets, is used in combination with PSO. We adopt particle swarm optimization to search for the cluster centres in an arbitrary data set automatically. PSO can search for the best solution using the probability options of the social-only and cognition-only models. The method is simple and valid, and it can avoid local minima. The proposed classification approach is applied to a database of 322 dense mammographic images originating from the MIAS database. Results show that the proposed PSO-FCM approach gives better detection performance than conventional approaches.
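
    The FCM half of the approach can be sketched as below. Note that the naive deterministic seeding here stands in for the PSO centre search described in the abstract, and the two-blob data set is synthetic.

```python
import numpy as np

def fcm(X, n_clusters=2, m=2.0, iters=50):
    """Plain fuzzy C-means: alternate membership and centre updates.
    Seeding simply spreads initial centres across the data; the paper
    instead uses PSO to search for the cluster centres."""
    step = len(X) // n_clusters
    centres = X[::step][:n_clusters].astype(float).copy()
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        dist = np.maximum(dist, 1e-12)          # guard exact-hit pixels
        U = dist ** (-2.0 / (m - 1.0))          # fuzzy membership update
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return centres, U

rng = np.random.default_rng(3)
blob_a = rng.normal([0.0, 0.0], 0.5, size=(60, 2))
blob_b = rng.normal([10.0, 10.0], 0.5, size=(60, 2))
X = np.vstack([blob_a, blob_b])
centres, memberships = fcm(X)
```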

  17. An algorithm of local earthquake detection from digital records

    Directory of Open Access Journals (Sweden)

    A. PROZOROV

    1978-06-01

    Full Text Available The problem of automatic detection of earthquake signals in seismograms and the definition of the first arrivals of P and S waves is considered. The algorithm is based on the analysis of the t(A) function, which represents the time of first appearance of a number of successive swings with amplitudes greater than A in the seismic signal. It exploits such common features of earthquake seismograms as a sudden first P-arrival with amplitude greater than the general noise amplitude and, after a definite interval of time, an S-arrival whose amplitude exceeds that of the P-arrival. The method was applied to 3-channel records of Friuli aftershocks. S-arrivals were defined correctly in all cases; P-arrivals were defined in most cases using strict detection criteria, and no false signals were detected. All P-arrivals were defined using soft detection criteria, but with less reliability, and two false events were obtained.
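    A simplified reading of the t(A) detector, find the first run of successive swings whose amplitude exceeds A, can be sketched as below. The swing definition (local extrema of the sampled trace) and all parameters are illustrative assumptions:

```python
import numpy as np

def t_of_A(signal, A, n_swings=3):
    """Return the sample index of the first run of `n_swings` successive
    local extrema (swings) whose absolute amplitude exceeds A, or None.
    A simplified reading of the t(A) function; the original operates on
    analogue seismogram swings."""
    x = np.asarray(signal, float)
    d = np.diff(x)
    # Local extrema: sign change of the first difference.
    ext = [i for i in range(1, len(x) - 1) if d[i - 1] * d[i] < 0]
    run = 0
    for i in ext:
        if abs(x[i]) > A:
            run += 1
            if run == n_swings:
                return i
        else:
            run = 0
    return None

# Noise of amplitude ~0.1 followed by an arrival with swings of amplitude ~1.
t = np.arange(200)
sig = np.where(t < 100, 0.1 * np.sin(0.9 * t), 1.0 * np.sin(0.9 * t))
onset = t_of_A(sig, A=0.5)
```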

  18. An algorithm J-SC of detecting communities in complex networks

    Science.gov (United States)

    Hu, Fang; Wang, Mingzhu; Wang, Yanran; Hong, Zhehao; Zhu, Yanhui

    2017-11-01

    Currently, community detection in complex networks has become a hot topic. In this paper, based on the Spectral Clustering (SC) algorithm, we introduce the idea of Jacobi iteration and propose a novel algorithm, J-SC, for community detection in complex networks. The accuracy and efficiency of this algorithm are tested on representative real-world networks and several computer-generated networks. The experimental results indicate that the J-SC algorithm can accurately and effectively detect the community structure in these networks. Meanwhile, compared with the state-of-the-art community detection algorithms SC, SOM, K-means, Walktrap and Fastgreedy, the J-SC algorithm performs better, acquiring higher values of modularity and NMI. Moreover, the new algorithm has a faster running time than the SOM and Walktrap algorithms.

  19. Archaeomagnetic SV curve for Belgium

    Science.gov (United States)

    Ech-chakrouni, Souad; Hus, Jozef

    2017-04-01

    Archaeomagnetic secular variation curves have been established for different countries in Europe, especially where archaeological sites more or less uniformly distributed in time are available. The disadvantage in that case is that the data have to be relocated to a single reference site. The proximity of the reference locality Paris to Belgium is why we used the French archaeomagnetic SV curve for the last three millennia for the archaeomagnetic dating of undated baked structures. In total, 85 baked structures have been examined, unearthed in 24 archaeological sites on the territory of Belgium. The ChRM of each sample was obtained by principal component analysis over at least three demagnetisation steps (Kirschvink 1980). Except for some outliers, the ChRM directions are very coherent, with a high confidence factor (α95). The aim is to establish an archaeomagnetic SV curve for Belgium with Uccle as reference locality, where the first measurement of the geomagnetic field was made in 1895. This curve would include all the available reference data within a radius of about 500 km around Uccle. Keywords: secular variation, archaeomagnetic dating, Belgium.

  20. A smart local moving algorithm for large-scale modularity-based community detection

    CERN Document Server

    Waltman, Ludo

    2013-01-01

    We introduce a new algorithm for modularity-based community detection in large networks. The algorithm, which we refer to as a smart local moving algorithm, takes advantage of a well-known local moving heuristic that is also used by other algorithms. Compared with these other algorithms, our proposed algorithm uses the local moving heuristic in a more sophisticated way. Based on an analysis of a diverse set of networks, we show that our smart local moving algorithm identifies community structures with higher modularity values than other algorithms for large-scale modularity optimization, among which the popular 'Louvain algorithm' introduced by Blondel et al. (2008). The computational efficiency of our algorithm makes it possible to perform community detection in networks with tens of millions of nodes and hundreds of millions of edges. Our smart local moving algorithm also performs well in small and medium-sized networks. In short computing times, it identifies community structures with modularity values equ...
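    The quantity that local moving heuristics optimize, Newman modularity, can be computed directly from the adjacency matrix and a partition. A minimal sketch (not the authors' implementation; the toy network is made up):

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    where k is the degree vector and 2m the total edge-end count."""
    A = np.asarray(adj, float)
    k = A.sum(axis=1)
    two_m = A.sum()
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]          # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by a single edge: the natural two-community split.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # the natural split
q_bad = modularity(A, [0, 1, 0, 1, 0, 1])    # an arbitrary split
```

A local moving step would repeatedly test whether relocating one node to a neighbouring community increases this Q.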

  1. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g., short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve a very high probability of detection and a very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model
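    The compare-to-reference idea can be sketched as a pointwise z-score of a measured TDR trace against an empirical model (mean and standard deviation) built from known-good cables. The signal shapes and the 5-sigma threshold below are invented for illustration, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(1)

def tdr_anomaly(trace, ref_traces, k=5.0):
    """Flag samples of a TDR trace that deviate from the pointwise mean of
    reference traces by more than k standard deviations.  A sketch of the
    compare-to-reference idea, not the report's actual detector."""
    refs = np.asarray(ref_traces, float)
    mu = refs.mean(axis=0)
    sigma = refs.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((np.asarray(trace, float) - mu) / sigma)
    return z > k

# Reference pulses with small test-to-test variation (the repeatability issue).
n = 200
base = np.exp(-0.5 * ((np.arange(n) - 50) / 5.0) ** 2)
refs = [base + rng.normal(0, 0.01, n) for _ in range(20)]
# Damaged cable: an impedance discontinuity adds a late reflection.
damaged = base + rng.normal(0, 0.01, n)
damaged[140:150] += 0.3
flags = tdr_anomaly(damaged, refs)
```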

  2. Design of infrasound-detection system via adaptive LMSTDE algorithm

    Science.gov (United States)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
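    The LMSTDE principle, adapting an FIR filter with the LMS rule so that its dominant tap reveals the delay between two sensor outputs, can be sketched as follows (a generic LMS time-delay estimator, not the thesis's lead-lag model):

```python
import numpy as np

rng = np.random.default_rng(2)

def lms_tde(x, y, n_taps=16, mu=0.01):
    """Adapt an FIR filter w so that w * x approximates y; for y a delayed
    copy of x, the weights converge to an impulse at the delay, so the
    index of the dominant tap is the delay estimate."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        xt = x[n - n_taps + 1:n + 1][::-1]   # xt[d] = x[n - d]
        e = y[n] - w @ xt                    # instantaneous error
        w += 2 * mu * e * xt                 # LMS weight update
    return int(np.argmax(np.abs(w)))

true_delay = 5
x = rng.normal(0, 1, 4000)
y = np.concatenate([np.zeros(true_delay), x[:-true_delay]])  # y[n] = x[n-5]
est = lms_tde(x, y)
```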

  3. StralSV: assessment of sequence variability within similar 3D structures and application to polio RNA-dependent RNA polymerase

    Energy Technology Data Exchange (ETDEWEB)

    Zemla, A; Lang, D; Kostova, T; Andino, R; Zhou, C

    2010-11-29

    Most of the currently used methods for protein function prediction rely on sequence-based comparisons between a query protein and those for which a functional annotation is provided. A serious limitation of sequence similarity-based approaches for identifying residue conservation among proteins is the low confidence in assigning residue-residue correspondences among proteins when the level of sequence identity between the compared proteins is poor. Multiple sequence alignment methods are more satisfactory - still, they cannot provide reliable results at low levels of sequence identity. Our goal in the current work was to develop an algorithm that could overcome these difficulties and facilitate the identification of structurally (and possibly functionally) relevant residue-residue correspondences between compared protein structures. Here we present StralSV, a new algorithm for detecting closely related structure fragments and quantifying residue frequency from tight local structure alignments. We apply StralSV in a study of the RNA-dependent RNA polymerase of poliovirus and demonstrate that the algorithm can be used to determine regions of the protein that are relatively unique or that shared structural similarity with structures that are distantly related. By quantifying residue frequencies among many residue-residue pairs extracted from local alignments, one can infer potential structural or functional importance of specific residues that are determined to be highly conserved or that deviate from a consensus. We further demonstrate that considerable detailed structural and phylogenetic information can be derived from StralSV analyses. StralSV is a new structure-based algorithm for identifying and aligning structure fragments that have similarity to a reference protein. StralSV analysis can be used to quantify residue-residue correspondences and identify residues that may be of particular structural or functional importance, as well as unusual or unexpected

  4. Jitter Estimation Algorithms for Detection of Pathological Voices

    Directory of Open Access Journals (Sweden)

    Dárcio G. Silva

    2009-01-01

    Full Text Available This work is focused on the evaluation of different methods to estimate the amount of jitter present in speech signals. The jitter value is a measure of the irregularity of a quasiperiodic signal and is a good indicator of the presence of pathologies in the larynx, such as vocal fold nodules or a vocal fold polyp. Given the irregular nature of the speech signal, each jitter estimation algorithm relies on its own model, making a direct comparison of the results very difficult. For this reason, the evaluation of the different jitter estimation methods was targeted at their ability to detect pathological voices. Two databases were used for this evaluation: a subset of the MEEI database and a smaller database acquired in the scope of this work. The results showed that there were significant differences in the performance of the algorithms being evaluated. Surprisingly, on the larger database the best results were achieved not with the commonly used relative jitter, measured as a percentage of the glottal cycle, but with absolute jitter values measured in microseconds. Also, the newly proposed measure for jitter, LocJitt, performed in general equal to or better than the commonly used tools MDVP and Praat.
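    The two measures the comparison hinges on can be computed from a sequence of glottal cycle lengths. A minimal sketch; the mean-absolute-difference convention below is one common definition, and MDVP and Praat each use their own variants:

```python
import numpy as np

def jitter_measures(periods_s):
    """Absolute jitter: mean absolute difference of consecutive glottal
    periods, reported in microseconds.  Relative jitter: the same value
    expressed as a percentage of the mean period."""
    p = np.asarray(periods_s, float)
    jitta = np.abs(np.diff(p)).mean()        # seconds
    jitt_rel = 100.0 * jitta / p.mean()      # percent of the mean cycle
    return jitta * 1e6, jitt_rel

# A 100-Hz voice (10-ms cycles) with +/-0.05-ms cycle-to-cycle perturbation.
periods = 0.010 + 0.00005 * np.array([1, -1, 1, -1, 1, -1])
jitta_us, jitt_pct = jitter_measures(periods)
```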

  5. On the Formal Verification of Conflict Detection Algorithms

    Science.gov (United States)

    Munoz, Cesar; Butler, Ricky W.; Carreno, Victor A.; Dowek, Gilles

    2001-01-01

    Safety assessment of new air traffic management systems is a main issue for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach, based on formal verification, for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of trajectory, which is a continuous path in the x-y plane constrained by physical laws and operational requirements. From the Model of trajectories, we extract, and formally prove, high level properties that can serve as a framework to analyze conflict scenarios. We use the Airborne Information for Lateral Spacing (AILS) alerting algorithm as a case study of our approach.

  6. Modified screening and ranking algorithm for copy number variation detection

    Science.gov (United States)

    Xiao, Feifei; Min, Xiaoyi; Zhang, Heping

    2015-01-01

    Motivation: Copy number variation (CNV) is a type of structural variation, usually defined as genomic segments that are 1 kb or larger, which present variable copy numbers when compared with a reference genome. The screening and ranking algorithm (SaRa) was recently proposed as an efficient approach for multiple change-points detection, which can be applied to CNV detection. However, some practical issues arise from application of SaRa to single nucleotide polymorphism data. Results: In this study, we propose a modified SaRa on CNV detection to address these issues. First, we use the quantile normalization on the original intensities to guarantee that the normal mean model-based SaRa is a robust method. Second, a novel normal mixture model coupled with a modified Bayesian information criterion is proposed for candidate change-point selection and further clustering the potential CNV segments to copy number states. Simulations revealed that the modified SaRa became a robust method for identifying change-points and achieved better performance than the circular binary segmentation (CBS) method. By applying the modified SaRa to real data from the HapMap project, we illustrated its performance on detecting CNV segments. In conclusion, our modified SaRa method improves SaRa theoretically and numerically, for identifying CNVs with high-throughput genotyping data. Availability and Implementation: The modSaRa package is implemented in R program and freely available at http://c2s2.yale.edu/software/modSaRa. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25542927
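    The screening step of a SaRa-style scan can be sketched as a moving two-window mean difference whose extrema are change-point candidates; the ranking, normal-mixture modelling and modified-BIC selection of the modified SaRa are omitted here:

```python
import numpy as np

def local_diagnostic(y, h):
    """SaRa-style screening statistic D(x): mean of the h points to the
    right of x minus mean of the h points to the left.  Peaks of |D| mark
    candidate change-points in a piecewise-constant mean model."""
    y = np.asarray(y, float)
    c = np.cumsum(np.concatenate([[0.0], y]))    # prefix sums, c[i] = sum(y[:i])
    D = np.full(len(y), np.nan)
    for x in range(h, len(y) - h):
        left = (c[x] - c[x - h]) / h
        right = (c[x + h] - c[x]) / h
        D[x] = right - left
    return D

# Piecewise-constant intensity signal with a copy-number-like shift at index 100.
rng = np.random.default_rng(3)
y = np.concatenate([np.zeros(100), 2 * np.ones(100)]) + rng.normal(0, 0.3, 200)
D = local_diagnostic(y, h=20)
cand = int(np.nanargmax(np.abs(D)))
```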

  7. Simvastatin (SV) metabolites in mouse tissues

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, C.A.; Vickers, S. (Merck Sharp and Dohme Research Labs., West Point, PA (United States))

    1990-02-26

    SV, a semisynthetic analog of lovastatin, is hydrolyzed in vivo to its hydroxy acid (SVA), a potent inhibitor of HMG CoA reductase (HR). Thus SV lowers plasma cholesterol. SV is a substrate for mixed function oxidases whereas SVA undergoes lactonization and {beta}-oxidation. Male CD-1 mice were dosed orally with a combination of ({sup 14}C)SV and ({sup 3}H)SVA at 25 mg/kg of each, bled and killed at 0.5, 2 and 4 hours. Labeled SV, SVA, 6{prime}exomethylene SV (I), 6{prime}CH{sub 2}OH-SV (II), 6{prime}COOH-SV (III) and a {beta}-oxidized metabolite (IV) were assayed in liver, bile, kidneys, testes and plasma by RIDA. Levels of potential and active HR inhibitors in liver were 10 to 40 fold higher than in other tissues. II and III, in which the configuration at 6{prime} is inverted, may be 2 metabolites of I. Metabolites I-III are inhibitors of HR in their hydroxy acid forms. Qualitatively ({sup 14}C)SV and ({sup 3}H)SVA were metabolized similarly (consistent with their proposed interconversion). However {sup 3}H-SVA, I-III (including hydroxy acid forms) achieved higher concentrations than corresponding {sup 14}C compounds (except in gall bladder bile). Major radioactive metabolites in liver were II-IV (including hydroxy acid forms). These metabolites have also been reported in rat tissues. In bile a large fraction of either label was unidentified polar metabolites. The presence of IV indicated that mice (like rats) are not good models for SV metabolism in man.

  8. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Science.gov (United States)

    Alomari, Yazan M.; Zaharatul Azma, Raja

    2014-01-01

    Segmentation and counting of blood cells are considered as an important step that helps to extract features to diagnose some specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic (RCD) algorithm to solve the initialization problem, detecting irregular circles (cells), selecting the optimal circle from the candidate circles, determining the number of iterations in a fully dynamic way to enhance algorithm detection, and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs. PMID:24803955
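    The circle-hypothesize-and-verify loop underlying RCD-style detection can be sketched as follows: fit a circle to three sampled edge points and keep the hypothesis supported by the most edge points. The paper's modifications (initialization, irregular circles, dynamic iteration count) are omitted, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def circle_from_3(p1, p2, p3):
    """Circle through three points via the perpendicular-bisector linear
    system; returns (cx, cy, r) or None for (near-)collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    cx, cy = np.linalg.solve(A, b)
    return float(cx), float(cy), float(np.hypot(x1 - cx, y1 - cy))

def random_circle_detect(edge_pts, trials=300, tol=1.5, min_inliers=20):
    """RCD-style loop: repeatedly sample 3 edge points, hypothesize a
    circle, and keep the hypothesis with the most supporting edge points."""
    pts = np.asarray(edge_pts, float)
    best, best_n = None, 0
    for _ in range(trials):
        idx = rng.choice(len(pts), 3, replace=False)
        c = circle_from_3(*pts[idx])
        if c is None:
            continue
        d = np.abs(np.hypot(pts[:, 0] - c[0], pts[:, 1] - c[1]) - c[2])
        n = int((d < tol).sum())
        if n > best_n:
            best, best_n = c, n
    return best if best_n >= min_inliers else None

# Synthetic edge map: a noisy circle (center (50, 40), radius 12) plus clutter.
ang = rng.uniform(0, 2 * np.pi, 80)
circle = np.c_[50 + 12 * np.cos(ang), 40 + 12 * np.sin(ang)] + rng.normal(0, 0.3, (80, 2))
clutter = rng.uniform(0, 100, (40, 2))
found = random_circle_detect(np.vstack([circle, clutter]))
```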

  9. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yazan M. Alomari

    2014-01-01

    Full Text Available Segmentation and counting of blood cells are considered as an important step that helps to extract features to diagnose some specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic (RCD) algorithm to solve the initialization problem, detecting irregular circles (cells), selecting the optimal circle from the candidate circles, determining the number of iterations in a fully dynamic way to enhance algorithm detection, and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.

  10. Accurate colon residue detection algorithm with partial volume segmentation

    Science.gov (United States)

    Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

    2004-05-01

    Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing malignant tumor. Due to some limitations of optical colonoscopy used in clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, partial volume effect and existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al is a very attractive method, which is also kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect the accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posterior probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in the specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types have been precisely detected. Meanwhile, the residue has been electronically removed and very smooth and clean interface along the colon wall has been obtained.

  11. Ortholog detection using the reciprocal smallest distance algorithm.

    Science.gov (United States)

    Wall, Dennis P; Deluca, Todd

    2007-01-01

    All protein coding genes have a phylogenetic history that when understood can lead to deep insights into the diversification or conservation of function, the evolution of developmental complexity, and the molecular basis of disease. One important part to reconstructing the relationships among genes in different organisms is an accurate method to find orthologs as well as an accurate measure of evolutionary diversification. The present chapter details such a method, called the reciprocal smallest distance algorithm (RSD). This approach improves upon the common procedure of taking reciprocal best Basic Local Alignment Search Tool hits (RBH) in the identification of orthologs by using global sequence alignment and maximum likelihood estimation of evolutionary distances to detect orthologs between two genomes. RSD finds many putative orthologs missed by RBH because it is less likely to be misled by the presence of close paralogs in genomes. The package offers a tremendous amount of flexibility in investigating parameter settings allowing the user to search for increasingly distant orthologs between highly divergent species, among other advantages. The flexibility of this tool makes it a unique and powerful addition to other available approaches for ortholog detection.
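    The reciprocity test at the heart of both RBH and RSD can be sketched with plain score tables; RSD's contribution, replacing best-hit scores with maximum-likelihood distances computed from global alignments, is only noted in a comment. The gene names and scores are invented:

```python
def reciprocal_best_hits(scores_ab, scores_ba):
    """Baseline reciprocal-best-hit orthologs from two score tables
    (higher = better): gene a and gene b are called orthologs when each
    is the other's best hit.  RSD keeps this reciprocity test but ranks
    hits by smallest maximum-likelihood evolutionary distance instead of
    best BLAST score."""
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items()}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items()}
    return sorted((a, b) for a, b in best_ab.items() if best_ba.get(b) == a)

# Hypothetical similarity scores between genes of genome A and genome B.
scores_ab = {"a1": {"b1": 90, "b2": 30}, "a2": {"b1": 40, "b2": 85}}
scores_ba = {"b1": {"a1": 88, "a2": 35}, "b2": {"a1": 20, "a2": 80}}
pairs = reciprocal_best_hits(scores_ab, scores_ba)
```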

  12. ASSESSMENT OF RELIABILITY AND COMPARISON OF TWO ALGORITHMS FOR HAIL HAZARD DETECTION FROM AIRCRAFT

    Directory of Open Access Journals (Sweden)

    I.M. Braun

    2005-02-01

    Full Text Available This paper presents and analyzes two algorithms for the detection of hail zones in clouds and precipitation: a parametric algorithm and an adaptive non-parametric algorithm. The reliability of detection of radar signals from hailstones is investigated by statistical simulation, with experimental research results as initial data. The results demonstrate the limits of both algorithms as well as the higher viability of the non-parametric algorithm. Polarimetric algorithms are useful for implementation in ground-based and airborne weather radars.

  13. SURF IA Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.

    2012-01-01

    The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives [1]. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements [3]. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit [4].

  14. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period: 1 October 2016–30 September 2017.

  15. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure

    Science.gov (United States)

    2018-01-01

    ARL-TR-8271 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure, by Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period ending September 2017.

  16. THE ALGORITHM FOR THE AUTOMATIC DETECTION OF THE WHISTLERS IN THE REAL-TIME MODE

    Directory of Open Access Journals (Sweden)

    E.A. Malysh

    2015-12-01

    Full Text Available This paper describes an algorithm for the automatic detection of whistlers, based on a nonlinear transformation of the VLF signal spectrogram. In the transformed spectrogram the whistler trace appears as a straight line, whose detection is an algorithmically simple task. Testing of the program implementation of the algorithm showed that detection can be performed in real-time mode.

  17. Scalable Distributed Change Detection from Astronomy Data Streams using Local, Asynchronous Eigen Monitoring Algorithms

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper considers the problem of change detection using local distributed eigen monitoring algorithms for next generation of astronomy petascale data pipelines...

  18. An Efficient Hierarchy Algorithm for Community Detection in Complex Networks

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available Community structure is one of the most fundamental and important topological characteristics of complex networks. Research on community structure has wide applications and is very important for analyzing the topological structure, understanding the functions, finding the hidden properties, and forecasting the time evolution of networks. This paper analyzes some related algorithms and proposes a new algorithm, the CN agglomerative algorithm, based on graph theory and the local connectedness of networks, to find communities in a network. We show this algorithm is distributed and polynomial; meanwhile the simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm to obtain a modified CN algorithm and apply it to dynamic complex networks, and the simulations also verify that the modified CN algorithm has high accuracy.

  19. Automated drusen detection in retinal images using analytical modelling algorithms

    Directory of Open Access Journals (Sweden)

    Manivannan Ayyakkannu

    2011-07-01

    Full Text Available Abstract Background Drusen are common features in the ageing macula associated with exudative Age-Related Macular Degeneration (ARMD). They are visible in retinal images and their quantitative analysis is important in the follow up of the ARMD. However, their evaluation is fastidious and difficult to reproduce when performed manually. Methods This article proposes a methodology for Automatic Drusen Deposits Detection and quantification in Retinal Images (AD3RI) by using digital image processing techniques. It includes an image pre-processing method to correct the uneven illumination and to normalize the intensity contrast with smoothing splines. The drusen detection uses a gradient based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected drusen are then fitted by Modified Gaussian functions, producing a model of the image that is used to evaluate the affected area. Twenty two images were graded by eight experts, with the aid of a custom made software, and compared with AD3RI. This comparison was based both on the total area and on the pixel-to-pixel analysis. The coefficient of variation, the intraclass correlation coefficient, the sensitivity, the specificity and the kappa coefficient were calculated. Results The ground truth used in this study was the experts' average grading. In order to evaluate the proposed methodology three indicators were defined: AD3RI compared to the ground truth (A2G); each expert compared to the other experts (E2E); and a standard Global Threshold method compared to the ground truth (T2G). The results obtained for the three indicators, A2G, E2E and T2G, were: coefficient of variation 28.8 %, 22.5 % and 41.1 %, intraclass correlation coefficient 0.92, 0.88 and 0.67, sensitivity 0.68, 0.67 and 0.74, specificity 0.96, 0.97 and 0.94, and kappa coefficient 0.58, 0.60 and 0.49, respectively. Conclusions The gradings produced by AD3RI obtained an agreement

  20. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.

  1. The Apriori Stochastic Dependency Detection (ASDD) algorithm for learning Stochastic logic rules

    OpenAIRE

    Child, C. H. T.; Stathis, K.

    2005-01-01

    Apriori Stochastic Dependency Detection (ASDD) is an algorithm for fast induction of stochastic logic rules from a database of observations made by an agent situated in an environment. ASDD is based on features of the Apriori algorithm for mining association rules in large databases of sales transactions [1] and the MSDD algorithm for discovering stochastic dependencies in multiple streams of data [15]. Once these rules have been acquired the Precedence algorithm assigns operator precedence w...

  2. Hardware Implementation of a Modified Delay-Coordinate Mapping-Based QRS Complex Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Andrej Zemva

    2007-01-01

    Full Text Available We present a modified delay-coordinate mapping-based QRS complex detection algorithm, suitable for hardware implementation. In the original algorithm, the phase-space portrait of an electrocardiogram signal is reconstructed in a two-dimensional plane using the method of delays. Geometrical properties of the obtained phase-space portrait are exploited for QRS complex detection. In our solution, a bandpass filter is used for ECG signal prefiltering and an improved method for detection threshold-level calculation is utilized. We developed the algorithm on the MIT-BIH Arrhythmia Database (sensitivity of 99.82% and positive predictivity of 99.82%) and tested it on the Long-Term ST Database (sensitivity of 99.72% and positive predictivity of 99.37%). Our algorithm outperforms several well-known QRS complex detection algorithms, including the original algorithm.
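
    The delay-coordinate mapping itself is beyond a short sketch, but the surrounding pipeline the abstract describes (prefilter, emphasize QRS energy, adaptive threshold) can be illustrated. Below is a hedged sketch of a generic threshold-based QRS detector, not the authors' algorithm; the window sizes, threshold fraction, and refractory period are illustrative assumptions:

    ```python
    # Hedged sketch of a generic threshold-based QRS detector (NOT the
    # delay-coordinate method of the paper): differentiate, square, smooth,
    # then pick peaks above an adaptive threshold. Parameters are illustrative.

    def detect_qrs(ecg, fs, win_ms=120, thresh_frac=0.5):
        # Central difference emphasises the steep QRS slopes.
        deriv = [ecg[i + 1] - ecg[i - 1] for i in range(1, len(ecg) - 1)]
        energy = [d * d for d in deriv]
        # Moving-window integration smooths the energy signal.
        w = max(1, int(fs * win_ms / 1000))
        smoothed = [sum(energy[max(0, i - w):i + 1]) / w for i in range(len(energy))]
        thresh = thresh_frac * max(smoothed)
        # Keep local maxima above threshold, with a 200 ms refractory period.
        refractory = int(0.2 * fs)
        peaks, last = [], -refractory
        for i in range(1, len(smoothed) - 1):
            if (smoothed[i] >= thresh and smoothed[i] >= smoothed[i - 1]
                    and smoothed[i] > smoothed[i + 1] and i - last > refractory):
                peaks.append(i)
                last = i
        return peaks
    ```

    On a synthetic ECG with four spikes, the detector reports four peaks near the spike locations (shifted by roughly the integration-window lag).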

  3. Two Algorithms for the Detection and Tracking of Moving Vehicle Targets in Aerial Infrared Image Sequences

    Directory of Open Access Journals (Sweden)

    Yutian Cao

    2015-12-01

    Full Text Available In this paper, by analyzing the characteristics of infrared moving targets, a Symmetric Frame Differencing Target Detection algorithm based on local clustering segmentation is proposed. In consideration of the high real-time performance and accuracy of traditional symmetric differencing, this novel algorithm uses local grayscale clustering to accomplish target detection after carrying out symmetric frame differencing to locate the regions of change. In addition, the mean shift tracking algorithm is also improved to solve the problem of missed targets caused by error convergence. As a result, a kernel-based mean shift target tracking algorithm based on detection updates is also proposed. This tracking algorithm makes use of the interaction between detection and tracking to correct the tracking errors in real time and to realize robust target tracking in complex scenes. In addition, the validity, robustness and stability of the proposed algorithms are all verified by experiments on mid-infrared aerial sequences with vehicles as targets.
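
    The symmetric (three-frame) differencing step that locates the regions of change can be sketched in a few lines; the local clustering segmentation and mean shift tracking are omitted. A minimal illustration with made-up frames and threshold, not the paper's implementation:

    ```python
    def symmetric_difference(prev, curr, nxt, thresh):
        """Three-frame symmetric differencing: a pixel is flagged as moving
        only if it differs from BOTH the previous and the next frame, which
        suppresses the 'ghost' left at the target's old position."""
        h, w = len(curr), len(curr[0])
        mask = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                d1 = abs(curr[y][x] - prev[y][x])
                d2 = abs(curr[y][x] - nxt[y][x])
                if d1 > thresh and d2 > thresh:
                    mask[y][x] = 1
        return mask
    ```

    With a bright target moving one pixel per frame, only the target's current position survives the double test; plain two-frame differencing would also flag its previous position.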

  4. Target detection using the background model from the topological anomaly detection algorithm

    Science.gov (United States)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene, and computes an anomalousness ranking for all of the pixels in the image with respect to the background in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which could be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. In this way, these connected components are characterized in three different approaches, where the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in detectors: Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using self-test data set provided as part of the RIT blind test target detection project are shown.

  5. Fast algorithm for probabilistic bone edge detection (FAPBED)

    Science.gov (United States)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  6. Spectroscopic observations of the RS CVn-type binary systems SV Cam and XY UMa

    Science.gov (United States)

    Rainger, P. P.; Hilditch, R. W.; Edwin, R. P.

    1991-01-01

    Radial velocities of the primary components of the two RS CVn-type binary systems SV Cam and XY UMa are presented, for the first time for XY UMa. Neither secondary component could be detected. A change of 5.0 ± 1.3 km/s in the systemic velocity of SV Cam is found over 40 years, which lends some support to the current model of SV Cam being a triple system. If the masses of the G3 V primary components of both systems are assumed to be 1 solar mass, then the secondaries are 0.7 (SV Cam) and 0.6 (XY UMa) solar masses; all four stars are main sequence objects with SV Cam being rather more evolved than XY UMa.

  7. Spectroscopic observations of the RS CVn-type binary systems SV Cam and XY UMa

    Energy Technology Data Exchange (ETDEWEB)

    Rainger, P.P.; Hilditch, R.W.; Edwin, R.P. (Saint Andrews Univ. (UK). Observatory)

    1991-01-01

    Radial velocities of the primary components of the two RS CVn-type binary systems SV Cam and XY UMa are presented, for the first time for XY UMa. Neither secondary component could be detected. A change of 5.0 ± 1.3 km s⁻¹ in the systemic velocity of SV Cam is found over 40 years, which lends some support to the current model of SV Cam being a triple system. If the masses of the G3 V primary components of both systems are assumed to be one solar mass, then the secondaries are 0.7 (SV Cam) and 0.6 (XY UMa) solar masses; all four stars are main sequence objects with SV Cam being rather more evolved than XY UMa. (author).

  8. Inappropriate Detection of a Supraventricular Tachycardia as Dual Tachycardia by the PR Logic™ Algorithm

    Directory of Open Access Journals (Sweden)

    Ajit Thachil, MD, DM, CCDS

    2014-05-01

    Full Text Available Tachycardia detection and therapy algorithms in Implantable Cardioverter-Defibrillators (ICDs) reduce, but do not eliminate, inappropriate ICD shocks. Awareness of the pros and cons of a particular algorithm helps to predict its utility in specific situations. We report a case where PR Logic™, an algorithm commonly used in currently implanted ICDs to differentiate supraventricular tachycardia (SVT) from ventricular tachycardia, resulted in inappropriate detection and shock for an SVT, and discuss several solutions to the problem.

  9. Parallelization of exoplanets detection algorithms based on field rotation; example of the MOODS algorithm for SPHERE

    Science.gov (United States)

    Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.

    2010-10-01

    Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner across the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still in progress. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response applied is not shift-invariant like usual linear filters. Testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to handle each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which estimates jointly the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, Jobs Array) have been elaborated for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions across the available processors by accounting at best for the different constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.

  10. Rocketdyne Safety Algorithm: Space Shuttle Main Engine Fault Detection

    Science.gov (United States)

    Norman, Arnold M., Jr.

    1994-01-01

    The Rocketdyne Safety Algorithm (RSA) has been developed to the point of use on the TTBE at MSFC on Task 4 of LeRC contract NAS3-25884. This document contains a description of the work performed, the results of the nominal test of the major anomaly test cases and a table of the resulting cutoff times, a plot of the RSA value vs. time for each anomaly case, a logic flow description of the algorithm, the algorithm code, and a development plan for future efforts.

  11. Genetic algorithm for flood detection and evacuation route planning

    Science.gov (United States)

    Gomes, Rahul; Straub, Jeremy

    2017-05-01

    A genetic-type algorithm is presented that uses satellite geospatial data to determine the most probable path to safety for individuals in a disaster area, where a traditional routing system cannot be used. The algorithm uses geological features and disaster information to determine the shortest safe path. It predicts how a flood can change a landform over time and uses this data to predict alternate routes. It also predicts safe routes in rural locations where GPS/map-based routing data is unavailable or inaccurate. Reflectance and a supervised classification algorithm are used and the output is compared with RFPI and PCR-GLOBWB data.
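
    The genetic-algorithm idea (evolve candidate routes, score them by hazard exposure, keep the fittest) can be sketched on a toy hazard grid. This is a generic GA skeleton under our own assumptions (monotone right/down paths, bitstring genomes, hypothetical parameter choices), not the paper's geospatial algorithm:

    ```python
    import random

    def ga_safe_path(hazard, generations=60, pop_size=30, seed=1):
        """Toy GA for a monotone path from the top-left to the bottom-right of a
        square hazard grid (moves: right=0, down=1). Fitness is the summed hazard
        along the path; lower is safer. All parameters are illustrative."""
        rng = random.Random(seed)
        n = len(hazard)
        L = 2 * (n - 1)                      # (n-1) rights + (n-1) downs

        def repair(genes):                   # enforce exactly n-1 'down' moves
            while genes.count(1) > n - 1:
                genes[rng.choice([i for i, g in enumerate(genes) if g == 1])] = 0
            while genes.count(1) < n - 1:
                genes[rng.choice([i for i, g in enumerate(genes) if g == 0])] = 1
            return genes

        def cost(genes):                     # hazard accumulated along the path
            r = c = 0
            total = hazard[0][0]
            for g in genes:
                r, c = (r + 1, c) if g else (r, c + 1)
                total += hazard[r][c]
            return total

        pop = [repair([rng.randint(0, 1) for _ in range(L)]) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            elite = pop[: pop_size // 2]     # truncation selection (elitist)
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, L)    # one-point crossover + repair
                child = repair(a[:cut] + b[cut:])
                if rng.random() < 0.2:       # mutation: swap two moves
                    i, j = rng.randrange(L), rng.randrange(L)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = elite + children
        best = min(pop, key=cost)
        return cost(best), best
    ```

    On a grid with a zero-hazard corridor down the first column and along the last row, the GA converges on that corridor.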

  12. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

    Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, the algorithm employs the motion vector (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the visual perception analysis compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as a part of a standard video codec at medium-to-low bit-rates or combined with other algorithms in fast video coding.

  13. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.

  14. Automated phylogenetic detection of recombination using a genetic algorithm

    National Research Council Canada - National Science Library

    Kosakovsky Pond, Sergei L; Posada, David; Gravenor, Michael B; Woelk, Christopher H; Frost, Simon D W

    2006-01-01

    We propose a model-based framework that uses a genetic algorithm to search a multiple-sequence alignment for putative recombination break points, quantifies the level of support for their locations...

  15. A Zero Velocity Detection Algorithm Using Inertial Sensors for Pedestrian Navigation Systems

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2010-10-01

    Full Text Available In pedestrian navigation systems, the position of a pedestrian is computed using an inertial navigation algorithm. In the algorithm, zero velocity updating plays an important role: zero velocity intervals are detected and the velocity error is reset. To use zero velocity updating, it is necessary to detect zero velocity intervals reliably. A new zero velocity detection algorithm is proposed in this paper, where only one gyroscope value is used. A Markov model is constructed using segmentation of gyroscope outputs instead of using gyroscope outputs directly, which makes the zero velocity detection more reliable.
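
    The baseline that such detectors improve on is simple to show: flag intervals where the gyroscope magnitude stays below a threshold long enough. A hedged sketch of that plain thresholding baseline (the cited paper instead builds a Markov model over segmented gyro output); threshold and minimum length are illustrative:

    ```python
    def zero_velocity_intervals(gyro, thresh=0.2, min_len=5):
        """Flag [start, end) runs where |gyro| stays below `thresh` for at
        least `min_len` consecutive samples - a simple stance-phase detector."""
        flags = [abs(g) < thresh for g in gyro]
        intervals, start = [], None
        for i, f in enumerate(flags + [False]):   # sentinel closes the last run
            if f and start is None:
                start = i
            elif not f and start is not None:
                if i - start >= min_len:
                    intervals.append((start, i))
                start = None
        return intervals
    ```

    During the detected intervals, an inertial navigation filter would reset its velocity error (the zero velocity update step described in the abstract).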

  16. A general-purpose contact detection algorithm for nonlinear structural analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.

    1993-05-01

    A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
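
    The two-part structure the abstract describes (a cheap global search for candidate node/surface pairs, then an accurate detailed check) is the classic broad-phase/narrow-phase split. A hedged 2-D sketch under our own simplifications (axis-aligned bounding boxes, straight master segments); the real algorithm also handles projected motion, shells, bricks, beams and particles:

    ```python
    def broad_phase(nodes, surfaces, margin):
        """Global search: keep (node, surface) pairs whose axis-aligned
        bounding boxes overlap within `margin` - a stand-in for the paper's
        location strategy."""
        candidates = []
        for i, (x, y) in enumerate(nodes):
            for j, ((xmin, ymin), (xmax, ymax)) in enumerate(surfaces):
                if (xmin - margin <= x <= xmax + margin and
                        ymin - margin <= y <= ymax + margin):
                    candidates.append((i, j))
        return candidates

    def narrow_phase(node, seg_a, seg_b):
        """Detailed check: signed distance of a node to a master segment;
        negative means penetration under a left-normal convention."""
        (ax, ay), (bx, by) = seg_a, seg_b
        ex, ey = bx - ax, by - ay
        nx, ny = -ey, ex                      # left normal of the segment
        length = (nx * nx + ny * ny) ** 0.5
        return ((node[0] - ax) * nx + (node[1] - ay) * ny) / length
    ```

    Only pairs surviving the broad phase pay for the exact distance computation, which is what makes the two-stage design efficient.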

  17. A Modified Energy Detection Based Spectrum Sensing Algorithm for Green Cognitive Radio Communication

    Directory of Open Access Journals (Sweden)

    Sidra Rajput

    2015-10-01

    Full Text Available Spectrum Sensing is the first and fundamental function of the Cognitive Cycle, and it plays a vital role in the success of CRs (Cognitive Radios). Spectrum Sensing indicates the presence or absence of PUs (Primary Users) in RF (Radio Frequency) spectrum occupancy measurements. In order to correctly determine the presence and absence of Primary Users, the algorithms in practice involve complex mathematics, which increases their computational complexity and hinders CRs from operating as 'green' communication systems. In this paper, an energy-efficient and computationally less complex energy detection based Spectrum Sensing algorithm is proposed. The design goals of the proposed algorithm are to save both processing and sensing energy. First, by using fewer MAC (Multiply and Accumulate) operations, it saves the processing energy needed to determine the presence and absence of PUs. Second, it saves the sensing energy by providing a way to find the lowest possible sensing time at which the spectrum is to be sensed. Two scenarios have been defined for testing the proposed algorithm, i.e. simulating the detection capability for Primary Users in ideal and noisy conditions. Detection of PUs in both of these scenarios has been compared to obtain the probability of detection. The energy efficiency of the proposed algorithm has been proved by a performance comparison between the proposed (less complex) algorithm and the legacy energy detection algorithm. With reduced complexity, the proposed spectrum sensing algorithm can be considered under the paradigm of Green Cognitive Radio Communication.
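
    The legacy energy detector that the paper compares against is easy to sketch: average the squared samples and compare against a threshold derived from the noise variance. A hedged illustration; the threshold rule (`pfa_factor`) is our own simplification, not the paper's reduced-MAC design:

    ```python
    def energy_detect(samples, noise_var, pfa_factor=3.0):
        """Classic energy detector: declare the channel occupied when the
        average signal energy exceeds a multiple of the noise variance."""
        energy = sum(s * s for s in samples) / len(samples)
        threshold = pfa_factor * noise_var
        return energy > threshold, energy
    ```

    With noise-only samples the statistic stays near the noise variance and the channel is declared idle; a strong primary-user signal pushes it well above threshold.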

  18. Optimized time alignment algorithm for LC-MS data: Correlation optimized warping using component detection algorithm-selected mass chromatograms

    NARCIS (Netherlands)

    Christin, C.; Smilde, A.K.; Hoefsloot, H.C.J.; Suits, F.; Bischoff, R.; Horvatovich, P.L.

    2008-01-01

    Correlation optimized warping (COW) based on the total ion current (TIC) is a widely used time alignment algorithm (COW-TIC). This approach works successfully on chromatograms containing few compounds and having a well-defined TIC. In this paper, we have combined COW with a component detection

  19. Optimized time alignment algorithm for LC-MS data : Correlation optimized warping using component detection algorithm-selected mass chromatograms

    NARCIS (Netherlands)

    Christin, Christin; Smilde, Age K.; Hoefsloot, Huub C. J.; Suits, Frank; Bischoff, Rainer; Horvatovich, Peter L.

    2008-01-01

    Correlation optimized warping (COW) based on the total ion current (TIC) is a widely used time alignment algorithm (COW-TIC). This approach works successfully on chromatograms containing few compounds and having a well-defined TIC. In this paper, we have combined COW with a component detection

  20. From Pixels to Region: A Salient Region Detection Algorithm for Location-Quantification Image

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. This process is essential for many image processing algorithms such as image retrieval, image segmentation, image recognition, and adaptive image compression. We propose a salient region detection algorithm for full-resolution images. This algorithm analyzes the randomness and correlation of image pixels and pixel-to-region saliency computation mechanism. The algorithm first obtains points with more saliency probability by using the improved smallest univalue segment assimilating nucleus operator. It then reconstructs the entire saliency region detection by taking these points as reference and combining them with image spatial color distribution, as well as regional and global contrasts. The results for subjective and objective image saliency detection show that the proposed algorithm exhibits outstanding performance in terms of technology indices such as precision and recall rates.

  1. An Improved DBSCAN Algorithm to Detect Stops in Individual Trajectories

    Directory of Open Access Journals (Sweden)

    Ting Luo

    2017-02-01

    Full Text Available With the increasing use of mobile GPS (global positioning system) devices, a large volume of trajectory data on users can be produced. In most existing work, trajectories are usually divided into a set of stops and moves. In trajectories, stops represent the most important and meaningful part of the trajectory; there are many data mining methods to extract these locations. DBSCAN (density-based spatial clustering of applications with noise) is a classical density-based algorithm used to find the high-density areas in space, and different derivative methods of this algorithm have been proposed to find the stops in trajectories. However, most of these methods required a manually-set threshold, such as the speed threshold, for each feature variable. In our research, we first defined our new concept of move ability. Second, by introducing the theory of data fields and by taking our new concept of move ability into consideration, we constructed a new, comprehensive, hybrid feature-based density measurement method which considers temporal and spatial properties. Finally, an improved DBSCAN algorithm was proposed using our new density measurement method. In the Experimental Section, the effectiveness and efficiency of our method is validated against real datasets. When comparing our algorithm with the classical density-based clustering algorithms, our experimental results show the efficiency of the proposed method.
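
    For reference, the classical DBSCAN baseline that the paper extends can be written compactly. A minimal, hedged pure-Python version over 2-D points (Euclidean distance); it is the textbook algorithm, not the paper's data-field density measure:

    ```python
    def dbscan(points, eps, min_pts):
        """Minimal DBSCAN: returns one cluster label per point, -1 for noise.
        A point is 'core' if it has at least min_pts neighbours within eps
        (itself included); clusters grow by expanding core points."""
        def neighbors(i):
            px, py = points[i]
            return [j for j, (qx, qy) in enumerate(points)
                    if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

        labels = [None] * len(points)
        cluster = 0
        for i in range(len(points)):
            if labels[i] is not None:
                continue
            seeds = neighbors(i)
            if len(seeds) < min_pts:
                labels[i] = -1                    # noise (may become border later)
                continue
            labels[i] = cluster
            queue = [j for j in seeds if j != i]
            while queue:
                j = queue.pop()
                if labels[j] == -1:
                    labels[j] = cluster           # noise point becomes a border point
                if labels[j] is not None:
                    continue
                labels[j] = cluster
                nb = neighbors(j)
                if len(nb) >= min_pts:            # only core points keep expanding
                    queue.extend(k for k in nb if labels[k] is None)
            cluster += 1
        return labels
    ```

    In the stop-detection setting of the abstract, the `points` would be trajectory samples and dense clusters would correspond to stops, with noise points lying along the moves.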

  2. An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision

    Science.gov (United States)

    2018-01-01

    ARL-TR-8269 ● JAN 2018 ● US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision.

  3. Insider Threat Control: Using Plagiarism Detection Algorithms to Prevent Data Exfiltration in Near Real Time

    Science.gov (United States)

    2013-10-01

    Report date: October 2013 (Final). Insider Threat Control: Using Plagiarism Detection Algorithms to Prevent Data Exfiltration in Near Real Time. Todd Lewellen, George J. Silowash. ... algorithms used in plagiarism detection software to search the index for bodies of text similar to the text found in the outgoing web request. If the
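
    The core comparison such a control performs is a text-similarity measure between an outgoing request and indexed sensitive documents. A hedged sketch of one common plagiarism-detection measure, word n-gram Jaccard similarity (our illustrative choice, not necessarily the report's exact method):

    ```python
    def ngram_jaccard(text_a, text_b, n=3):
        """Word n-gram Jaccard similarity in [0, 1]. A high score for an
        outgoing web request against an indexed sensitive document would
        raise an exfiltration alert in a system like the one described."""
        def grams(text):
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        ga, gb = grams(text_a), grams(text_b)
        if not ga or not gb:
            return 0.0
        return len(ga & gb) / len(ga | gb)
    ```

    Unlike exact-match filters, overlapping n-grams still score highly when an insider lightly rewords the exfiltrated text.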

  4. A maximal clique based multiobjective evolutionary algorithm for overlapping community detection

    OpenAIRE

    Wen, Xuyun; Chen, Wei-Neng; Lin, Ying; Gu, Tianlong; Zhang, Huaxiang; Li, Yun; Yin, Yilong; Zhang, Jun

    2016-01-01

    Detecting community structure has become one important technique for studying complex networks. Although many community detection algorithms have been proposed, most of them focus on separated communities, where each node can belong to only one community. However, in many real-world networks, communities often overlap with each other. Developing overlapping community detection algorithms thus becomes necessary. Along this avenue, this paper proposes a maximal clique based multiob...

  5. Sensor failure detection and isolation in flexible structures using the eigensystem realization algorithm

    Science.gov (United States)

    Zimmerman, David C.; Lyde, Terri L.

    Sensor failure detection and isolation (FDI) for flexible structures is approached from a system realization perspective. Instead of using hardware or analytical model redundancy, system realization is utilized to provide an experimental model based redundancy. The FDI algorithm utilizes the eigensystem realization algorithm to determine a minimum-order state space realization of the structure in the presence of noisy measurements. The FDI algorithm utilizes statistical comparisons of successive realizations to detect and isolate the failed sensor component. Due to the nature in which the FDI algorithm is formulated, it is also possible to classify the failure mode of the sensor. Results are presented using both numerically simulated and actual experimental data.

  6. Columnar structure of SV40 minichromosome

    Directory of Open Access Journals (Sweden)

    Edward N Trifonov

    2015-07-01

    Full Text Available Like the sequence of the strongest 601 clone nucleosome of Lowary and Widom, the SV40 genome sequence contains tracks of YR dinucleotides separated by small integers of the 10.4n base series (10, 11, 21 and 30 bases). The tracks, however, substantially exceed the nucleosome DNA size and, thus, correspond to a more extended structure, columnar chromatin. The micrococcal nuclease digests of the SV40 chromatin do not show uniquely positioned individual nucleosomes. This confirms the columnar structure of the minichromosome, as well as earlier electron microscopy studies.

  7. A New Lightweight Watchdog-Based Algorithm for Detecting Sybil Nodes in Mobile WSNs

    Directory of Open Access Journals (Sweden)

    Rezvan Almas Shehni

    2017-12-01

    Full Text Available Wide-spread deployment of Wireless Sensor Networks (WSNs) necessitates special attention to security issues, amongst which Sybil attacks are the most important ones. As a core to Sybil attacks, malicious nodes try to disrupt network operations by creating several fabricated IDs. Due to energy consumption concerns in WSNs, devising detection algorithms which release the sensor nodes from high computational and communicational loads is of great importance. In this paper, a new computationally lightweight watchdog-based algorithm is proposed for detecting Sybil IDs in mobile WSNs. The proposed algorithm employs watchdog nodes for collecting detection information and a designated watchdog node for detection information processing and final Sybil list generation. Benefiting from a newly devised co-presence state diagram and adequate detection rules, the new algorithm features low extra communication overhead, as well as a satisfactory compromise between two otherwise contradictory detection measures of performance, the True Detection Rate (TDR) and the False Detection Rate (FDR). Extensive simulation results illustrate the merits of the new algorithm compared to a couple of recent watchdog-based Sybil detection algorithms.

  8. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  9. Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees

    Science.gov (United States)

    Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.

    2017-05-01

    A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces other than the frontal position through the use of multiple classifiers. Each classifier is trained for a specific range of head rotation angles. The results showed a high detection rate for CEDT on images of standard size. The algorithm increases the area under the ROC-curve by 13% compared to the standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of 5 different cascades for frontal/non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded system and mobile device industries because it can reduce the cost of hardware and make battery life longer.
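
    The cascade structure itself, which gives both Viola-Jones and CEDT their speed, is an early-reject chain: cheap stages run first and most non-face windows never reach the expensive stages. A hedged sketch with toy stand-in stages (the real stages are decision-tree ensembles):

    ```python
    def cascade_detect(window, stages):
        """Early-reject cascade: `stages` is a list of (score_fn, threshold)
        pairs, ordered cheapest first. A window counts as a detection only
        if it passes every stage; one failed stage rejects it immediately."""
        for score_fn, threshold in stages:
            if score_fn(window) < threshold:
                return False          # rejected early, later stages never run
        return True
    ```

    Because background windows vastly outnumber faces, the average cost per window is close to the cost of the first stage alone.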

  10. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Science.gov (United States)

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
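
    The K-means clustering step used for binarising the bond-surface image can be illustrated in one dimension: cluster the grayscale values into two groups and threshold at the midpoint between the centres. A hedged sketch of that clustering step only, not the full WFP pipeline; the iteration count is illustrative:

    ```python
    def two_means_threshold(pixels, iters=20):
        """1-D k-means with k=2 on grayscale values; returns the midpoint
        between the two cluster centres as a binarisation threshold."""
        c0, c1 = float(min(pixels)), float(max(pixels))   # init at extremes
        for _ in range(iters):
            g0 = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
            g1 = [p for p in pixels if abs(p - c0) > abs(p - c1)]
            if g0:
                c0 = sum(g0) / len(g0)
            if g1:
                c1 = sum(g1) / len(g1)
        return (c0 + c1) / 2
    ```

    Pixels below the returned threshold would be labelled as one class (e.g. wood failure) and pixels above it as the other, and the WFP follows from the class areas.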

  11. Multi-pattern string matching algorithms comparison for intrusion detection system

    Science.gov (United States)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The Intrusion Detection System (IDS) has become an important part of any modern network to protect the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these string matching algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS performance. A new algorithm that can overcome this weakness of the IDS needs to be developed. Improving the multi-pattern matching algorithm ensures that an IDS can work properly and that these limitations can be overcome. In this paper, we perform a comparison between our three multi-pattern matching algorithms, MP-KR, MPH-QS and MPH-BMH, and their corresponding original algorithms KR, QS and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, while MP-KR is the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding proves that the multi-pattern matching algorithms are more efficient in high-speed networks.
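
    The point of multi-pattern matching is that all signature patterns are found in a single pass over the traffic. The paper's MP-KR/MPH-QS/MPH-BMH variants are hash-based; shown below, for illustration, is the classic alternative, an Aho-Corasick automaton:

    ```python
    from collections import deque

    def build_automaton(patterns):
        """Aho-Corasick: a trie with failure links, so one pass over the text
        reports every occurrence of every pattern simultaneously."""
        goto, fail, out = [{}], [0], [set()]
        for pat in patterns:                      # build the trie
            state = 0
            for ch in pat:
                if ch not in goto[state]:
                    goto.append({}); fail.append(0); out.append(set())
                    goto[state][ch] = len(goto) - 1
                state = goto[state][ch]
            out[state].add(pat)
        queue = deque(goto[0].values())           # BFS to set failure links
        while queue:
            s = queue.popleft()
            for ch, t in goto[s].items():
                queue.append(t)
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f].get(ch, 0) if goto[f].get(ch, 0) != t else 0
                out[t] |= out[fail[t]]            # inherit suffix matches
        return goto, fail, out

    def search(text, automaton):
        """Return (start_index, pattern) for every match in one pass."""
        goto, fail, out = automaton
        state, hits = 0, []
        for i, ch in enumerate(text):
            while state and ch not in goto[state]:
                state = fail[state]
            state = goto[state].get(ch, 0)
            for pat in out[state]:
                hits.append((i - len(pat) + 1, pat))
        return hits
    ```

    In an IDS setting the patterns would be attack signatures and the text a packet payload; the scan cost is linear in the payload length regardless of how many signatures are loaded.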

  12. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    Science.gov (United States)

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images. Besides, it does not validly reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) model in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly by building the Gauss-Markov parameters, which avoids the huge computation over hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery data is simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. The pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The anomaly degree is calculated with the mean vector and the inverse covariance matrix, both computed within the window. The image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, the regional hypothesis detection algorithm based on GMRF, and the algorithm proposed in this paper are simulated with AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves the detection efficiency and reduces the false alarm rate. We collected operation time statistics of the three algorithms in the same computing environment; the results show that the proposed algorithm improves the operation time by 45.2%, which demonstrates good computing efficiency.
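
    The traditional global RX baseline mentioned in the abstract scores each pixel by its Mahalanobis distance from the scene mean. A hedged pure-Python sketch restricted to two spectral bands so the 2x2 covariance can be inverted by hand (real detectors use the full band count, and the paper replaces the global statistics with GMRF-windowed ones):

    ```python
    def rx_scores(pixels):
        """Global RX detector for two-band pixels: Mahalanobis distance of
        each pixel from the scene mean under the scene covariance."""
        n = len(pixels)
        m0 = sum(p[0] for p in pixels) / n
        m1 = sum(p[1] for p in pixels) / n
        c00 = sum((p[0] - m0) ** 2 for p in pixels) / n
        c11 = sum((p[1] - m1) ** 2 for p in pixels) / n
        c01 = sum((p[0] - m0) * (p[1] - m1) for p in pixels) / n
        det = c00 * c11 - c01 * c01
        i00, i01, i11 = c11 / det, -c01 / det, c00 / det   # 2x2 inverse covariance
        scores = []
        for p in pixels:
            d0, d1 = p[0] - m0, p[1] - m1
            scores.append(d0 * (i00 * d0 + i01 * d1) + d1 * (i01 * d0 + i11 * d1))
        return scores
    ```

    A pixel whose spectrum sits far from the background cloud receives the largest score; thresholding the scores yields the anomaly map.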

  13. Automatic QRS complex detection algorithm designed for a novel wearable, wireless electrocardiogram recording device

    DEFF Research Database (Denmark)

    Saadi, Dorthe Bodholt; Egstrup, Kenneth; Branebjerg, Jens

    2012-01-01

    We have designed and optimized an automatic QRS complex detection algorithm for electrocardiogram (ECG) signals recorded with the DELTA ePatch platform. The algorithm is able to automatically switch between single-channel and multi-channel analysis mode. This preliminary study includes data from ...

  14. An optimized outlier detection algorithm for jury-based grading of engineering design projects

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Espensen, Christina; Clemmensen, Line Katrine Harder

    2016-01-01

    This work characterizes and optimizes an outlier detection algorithm to identify potentially invalid scores produced by jury members while grading engineering design projects. The paper describes the original algorithm and the associated adjudication process in detail. The impact of the various...

  15. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    Science.gov (United States)

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC) as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS) as the minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm's robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait training and/or assistive devices.
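
    For a single gait cycle, the event definitions in the abstract reduce to locating extrema of the two shank signals. An offline toy version (the published algorithm is adaptive and causal, which this sketch is not):

```python
import numpy as np

def detect_gait_events(flex_angle, ang_vel):
    """Locate the three events of one gait cycle as defined in the
    abstract: IC = minimum of the flexion/extension angle; EC and MS =
    minimum and maximum of the shank angular velocity."""
    ic = int(np.argmin(flex_angle))
    ec = int(np.argmin(ang_vel))
    ms = int(np.argmax(ang_vel))
    return ic, ec, ms
```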

  16. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    Science.gov (United States)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize, and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential detection performance advantages of operating in the Fourier domain on long-exposure images of small and/or dim space objects from ground-based telescopes. A binary hypothesis test is developed from the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel of the Fourier-transformed images to determine whether an object is present, based on the threshold derived from the likelihood ratio test. Using simulated data, the performance of the Fourier-domain detection algorithm is compared to the algorithm currently used in space situational awareness applications to evaluate its value.
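
    The spatial matched filter the paper uses as its point of comparison can be sketched as FFT-based cross-correlation of the image with the point spread function, thresholded pixel by pixel; under white Gaussian noise this is the classical detector. The PSF below is an illustrative assumption, and this is the baseline, not the proposed Fourier-domain likelihood ratio test.

```python
import numpy as np

def matched_filter(img, psf):
    """Cross-correlate an image with the PSF via the FFT; thresholding
    this statistic at each pixel is the spatial-correlator detection
    decision described in the abstract."""
    F = np.fft.fft2(img)
    H = np.fft.fft2(psf, s=img.shape)
    return np.real(np.fft.ifft2(F * np.conj(H)))
```

    The correlation peaks where the PSF template aligns with an object, so a noise-free object placed at a known location produces the maximum statistic exactly there.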

  17. Comparison of fractal dimension estimation algorithms for epileptic seizure onset detection

    Science.gov (United States)

    Polychronaki, G. E.; Ktonas, P. Y.; Gatzonis, S.; Siatouni, A.; Asvestas, P. A.; Tsekou, H.; Sakas, D.; Nikita, K. S.

    2010-08-01

    Fractal dimension (FD) is a natural measure of the irregularity of a curve. In this study the performances of three waveform FD estimation algorithms (i.e. Katz's, Higuchi's and the k-nearest neighbour (k-NN) algorithm) were compared in terms of their ability to detect the onset of epileptic seizures in scalp electroencephalogram (EEG). The selection of parameters involved in FD estimation, evaluation of the accuracy of the different algorithms and assessment of their robustness in the presence of noise were performed based on synthetic signals of known FD. When applied to scalp EEG data, Katz's and Higuchi's algorithms were found to be incapable of producing consistent changes of a single type (either a drop or an increase) during seizures. On the other hand, the k-NN algorithm produced a drop, starting close to the seizure onset, in most seizures of all patients. The k-NN algorithm outperformed both Katz's and Higuchi's algorithms in terms of robustness in the presence of noise and seizure onset detection ability. The seizure detection methodology, based on the k-NN algorithm, yielded in the training data set a sensitivity of 100% with 10.10 s mean detection delay and a false positive rate of 0.27 h⁻¹, while the corresponding values in the testing data set were 100%, 8.82 s and 0.42 h⁻¹, respectively. The above detection results compare favourably to those of other seizure onset detection methodologies applied to scalp EEG in the literature. The methodology described, based on the k-NN algorithm, appears to be promising for the detection of the onset of epileptic seizures based on scalp EEG.
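
    Of the three estimators compared, Katz's is the simplest to state: FD = log10(n) / (log10(n) + log10(d/L)), where L is the total curve length, d is the maximum distance from the first sample, and n is the number of steps. A direct sketch:

```python
import numpy as np

def katz_fd(x):
    """Katz's fractal dimension of a 1-D waveform, one of the three
    estimators compared in the study."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                       # number of steps
    L = np.abs(np.diff(x)).sum()         # total curve length
    d = np.abs(x - x[0]).max()           # max distance from first sample
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

    A straight line yields FD = 1; irregular signals inflate L relative to d, pushing FD above 1.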

  18. A New Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Complex Networks

    Directory of Open Access Journals (Sweden)

    Guoqiang Chen

    2013-01-01

    Full Text Available Community detection in dynamic networks is an important research topic that has received an enormous amount of attention in recent years. Modularity was selected as the measure to quantify the quality of the community partition in previous detection methods. However, modularity suffers from resolution limits. In this paper, we propose a novel multiobjective evolutionary algorithm for community detection in dynamic networks, based on the framework of the nondominated sorting genetic algorithm. Modularity density, which can address the limitations of the modularity function, is adopted to measure the snapshot cost, and normalized mutual information is selected to measure the temporal cost. Problem-specific knowledge is used in designing the genetic operators. Furthermore, a local search operator was designed, which improves the effectiveness and efficiency of community detection. Experimental studies based on synthetic datasets show that the proposed algorithm obtains better performance than the compared algorithms.
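
    For reference, the modularity measure that the abstract criticizes for its resolution limits is Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * [c_i == c_j]. A minimal dense-matrix sketch (the paper itself optimizes modularity density and NMI instead):

```python
import numpy as np

def modularity(adj, labels):
    """Newman's modularity Q of a partition: fraction of edges inside
    communities minus the fraction expected under a random rewiring
    that preserves node degrees."""
    adj = np.asarray(adj, dtype=float)
    m2 = adj.sum()                        # equals 2m for an undirected graph
    k = adj.sum(axis=1)                   # node degrees
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return float(((adj - np.outer(k, k) / m2) * same).sum() / m2)
```

    Two disconnected triangles partitioned into their natural communities give the textbook value Q = 0.5.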

  19. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Full Text Available Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared-sensor-based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed with a visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% and a processing speed of 25 Hz, which is better than existing methods.

  20. A hierarchical lazy smoking detection algorithm using smartwatch sensors

    NARCIS (Netherlands)

    Shoaib, M.; Scholten, Johan; Havinga, Paul J.M.; Durmaz, O.

    2016-01-01

    Smoking is known to be one of the main causes for premature deaths. A reliable smoking detection method can enable applications for an insight into a user’s smoking behaviour and for use in smoking cessation programs. However, it is difficult to accurately detect smoking because it can be performed

  1. An improved data clustering algorithm for outlier detection

    Directory of Open Access Journals (Sweden)

    Anant Agarwal

    2016-12-01

    Full Text Available Data mining is the extraction of hidden predictive information from large databases; it is a technology with the potential to study and analyze useful information present in data. Data objects that do not fit the general behavior of the data are termed outliers. Outlier detection in databases has numerous applications, such as fraud detection, customized marketing, and the search for terrorism. By definition, outliers are rare occurrences and hence represent a small portion of the data. However, the use of outlier detection for various purposes is not an easy task. This research proposes a modified PAM algorithm for detecting outliers. The proposed technique has been implemented in Java. The results produced by the proposed technique are found to be better than the existing technique in terms of the outliers detected and time complexity.
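
    One way PAM-style clustering can be turned into an outlier detector is to flag points lying far from every medoid. The sketch below is a naive alternating k-medoids with a quantile cut-off, an illustration of the idea only; the paper's modified PAM is not described in enough detail here to reproduce, and k, q and the initialization are assumptions.

```python
import numpy as np

def kmedoids_outliers(X, k=2, q=0.9, n_iter=20):
    """Cluster with naive alternating k-medoids, then flag as outliers
    the points whose distance to the nearest medoid exceeds the
    q-quantile of all such distances."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    medoids = np.arange(k)                                # naive deterministic init
    for _ in range(n_iter):
        assign = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members):
                # medoid = member minimizing total distance within its cluster
                new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    nearest = D[:, medoids].min(axis=1)
    return nearest > np.quantile(nearest, q)
```

    On two tight clusters plus one distant point, the distant point receives by far the largest nearest-medoid distance and is flagged.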

  2. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques

    Science.gov (United States)

    2018-01-09

    ARL-TR-8272 ● JAN 2018 ● US Army Research Laboratory ● An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques (only report-cover and standard-form fields are preserved in this record).

  3. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    OpenAIRE

    Shashwat Pathak; Basant Kumar

    2016-01-01

    This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, available methods for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera, both of which are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from the true ...

  4. Fuzzy Kernel k-Medoids algorithm for anomaly detection problems

    Science.gov (United States)

    Rustam, Z.; Talita, A. S.

    2017-07-01

    Intrusion Detection Systems (IDS) are an essential part of security systems used to strengthen the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and exploit the available data sources. There are two approaches to IDS: misuse detection and anomaly detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve anomaly detection problems. Besides using the fuzzy membership concept to assign objects to clusters, other approaches such as combined fuzzy and possibilistic membership or feature-weighted methods are also used. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic membership, as a powerful method for solving the anomaly detection problem, since in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classify the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the data for training, with a clustering accuracy of 90.28%.

  5. Building test data from real outbreaks for evaluating detection algorithms.

    Directory of Open Access Journals (Sweden)

    Gaetan Texier

    Full Text Available Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape, and covering a sufficient range of agents, sizes and durations, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method (ITSM), Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate the performance of each of the resampling algorithms. Our analysis confirms the influence on the results of the type of algorithm used and of the simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor). We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. an overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, the binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak
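
    The simplest of the listed resampling schemes, inverse transform sampling, can be sketched directly on an epidemic curve: normalize the historical daily counts into an empirical distribution over days, then redraw the desired number of cases through the inverse CDF. (The homothetic transformation step and the other samplers are omitted.)

```python
import numpy as np

def its_resample(daily_counts, n_cases, seed=0):
    """Inverse Transform Sampling (the 'ITSM' option in the abstract):
    treat the historical daily case counts as an empirical distribution
    over outbreak days and redraw n_cases cases from it."""
    p = np.asarray(daily_counts, dtype=float)
    cdf = np.cumsum(p / p.sum())                        # empirical CDF over days
    u = np.random.default_rng(seed).random(n_cases)     # uniforms in [0, 1)
    days = np.searchsorted(cdf, u, side="right")        # invert the CDF
    return np.bincount(days, minlength=len(p))
```

    Each simulated curve has exactly n_cases cases, distributed across days in proportion to the historical curve.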

  6. Automatic QRS complex detection algorithm designed for a novel wearable, wireless electrocardiogram recording device.

    Science.gov (United States)

    Nielsena, Dorthe B; Egstrup, Kenneth; Branebjerg, Jens; Andersen, Gunnar B; Sorensen, Helge B D

    2012-01-01

    We have designed and optimized an automatic QRS complex detection algorithm for electrocardiogram (ECG) signals recorded with the DELTA ePatch platform. The algorithm is able to automatically switch between single-channel and multi-channel analysis mode. This preliminary study includes data from 11 patients measured with the DELTA ePatch platform and the algorithm achieves an average QRS sensitivity and positive predictivity of 99.57% and 99.57%, respectively. The algorithm was also evaluated on all 48 records from the MIT-BIH Arrhythmia Database (MITDB) with an average sensitivity and positive predictivity of 99.63% and 99.63%, respectively.
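
    The abstract does not disclose the detector's internals, but the standard shape of a single-channel QRS detector can be sketched as slope-energy thresholding with a refractory period (all parameters below are illustrative assumptions, not the DELTA ePatch algorithm):

```python
import numpy as np

def detect_qrs(ecg, fs, frac=0.5, refractory_s=0.2):
    """Naive single-channel QRS detector: square the first difference
    to emphasise the steep QRS slopes, then accept peaks above a
    fraction of the maximum while enforcing a physiological
    refractory period."""
    energy = np.diff(np.asarray(ecg, dtype=float)) ** 2
    thresh = frac * energy.max()
    gap = int(refractory_s * fs)          # min samples between beats
    beats, last = [], -gap
    for i, e in enumerate(energy):
        if e > thresh and i - last >= gap:
            beats.append(i)
            last = i
    return beats
```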

  7. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    Science.gov (United States)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multi-level structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests of the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight-control-level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.
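
    The common core of both tests is a parity-space residual for the redundant sensor array: measurements m = Hx + f are projected onto the left null space of the geometry matrix H, which removes the true state x, so any sensor fault f shows up in the residual. A textbook sketch of the idea (not the flight algorithms):

```python
import numpy as np

def parity_residual(H, m):
    """Parity-space residual for redundant sensors m = H x + f: project
    the measurements onto the left null space of H, removing the true
    state x; any sensor fault f inflates the residual norm."""
    U, s, Vt = np.linalg.svd(H, full_matrices=True)
    V = U[:, H.shape[1]:]            # basis with V.T @ H == 0
    return float(np.linalg.norm(V.T @ m))
```

    With four sensors measuring one quantity, the fault-free residual is zero regardless of the measured value, while a single-sensor bias raises it in proportion to the fault.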

  8. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on moving-target detection and tracking algorithms for aerial video surveillance. The study includes moving-target detection, behavioural analysis of moving targets, and automatic target tracking. For moving-target detection, considering the characteristics of background subtraction and the frame-difference method, a background reconstruction method is used to locate moving targets accurately. For the behavioural analysis of moving objects, the detection area is shown in a binary image using Matlab, and it is analysed whether moving objects intrude and in which direction. For automatic tracking of moving targets, a video tracking algorithm that predicts object centroids based on Kalman filtering is proposed.
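
    The centroid-prediction step can be sketched as a constant-velocity Kalman filter over measured centroids; the noise parameters below are assumptions, and this is a generic filter, not the authors' tracker.

```python
import numpy as np

def kalman_track(centroids, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over measured object centroids
    (x, y): state is [x, y, vx, vy]; each step predicts the next
    centroid and corrects the prediction with the measurement."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                      # position += velocity * dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                     # only position is measured
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.array([*centroids[0], 0.0, 0.0])
    P = np.eye(4)
    estimates = []
    for z in centroids:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)  # update
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)
```

    On a straight-line trajectory, the estimated velocity converges and the filtered centroid settles onto the measurements.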

  9. Evaluation of Face Detection Algorithms for the Bank Client Identity Verification

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2017-06-01

    Full Text Available Results of an investigation of the efficiency of face detection algorithms in a banking client visual verification system are presented. The video recordings were made under real conditions in three bank operating outlets using a miniature industrial USB camera. The aim of the experiments was to check the practical usability of face detection methods in a biometric bank client verification system. The main assumption was to keep user interaction with the application as simple as possible. The applied face detection algorithms are described, and the results of face detection achieved under real bank environment conditions are presented. Practical limitations of the application, based on the problems encountered, are discussed.

  10. Change Detection Algorithms for Information Assurance of Computer Networks

    National Research Council Canada - National Science Library

    Cardenas, Alvaro A

    2002-01-01

    .... In this thesis, the author will focus on the detection of three attack scenarios: the spreading of active worms throughout the Internet, distributed denial of service attacks, and routing attacks to wireless ad hoc networks...

  11. Algorithms for Speeding up Distance-Based Outlier Detection

    Data.gov (United States)

    National Aeronautics and Space Administration — The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. We address...

  12. A newly discovered autograph of Filip de Diversi from 1455: epistles of St. Jerome, St. Augustine and others

    OpenAIRE

    Janeković-Römer, Zdenka

    2010-01-01

    The newly discovered autograph of Filip de Diversi, a codex into which he copied epistles of St. Jerome, St. Augustine and others, provides important new biographical and bibliographical information about him. We learn that in 1455 he was still alive and residing in Venice. The extensive codex (278 folios) is preserved in the library of the Earl of Leicester at Holkham Hall in England. The article provides a list of all the epistles copied into the codex, with bibliographical details.

  13. Planet Detection Algorithms for the Terrestrial Planet Finder-C

    Science.gov (United States)

    Kasdin, N. J.; Braems, I.

    2005-12-01

    Critical to mission planning for the Terrestrial Planet Finder Coronagraph (TPF-C) is the ability to estimate integration times for planet detection. This detection is complicated by the presence of background noise due to local and exo-zodiacal dust, by residual speckle due to optical errors, and by the dependence of the PSF shape on the specific coronagraph. In this paper we examine in detail the use of PSF fitting (matched filtering) for planet detection, derive probabilistic bounds for the signal-to-noise ratio by balancing missed-detection and false-alarm rates, and demonstrate that this is close to the optimal linear detection technique. We then compare it to a Bayesian detection approach and show that for very low background the Bayesian method offers integration-time improvements, but rapidly approaches the PSF-fitting result for reasonable levels of background noise. We confirm this via Monte Carlo simulations. This work was supported under a grant from the Jet Propulsion Laboratory and by a fellowship from the Institut National de Recherche en Informatique et Automatique (INRIA).

  14. A low-power fall detection algorithm based on triaxial acceleration and barometric pressure.

    Science.gov (United States)

    Wang, Changhong; Narayanan, Michael R; Lord, Stephen R; Redmond, Stephen J; Lovell, Nigel H

    2014-01-01

    This paper proposes a low-power fall detection algorithm based on triaxial accelerometry and barometric pressure signals. The algorithm dynamically adjusts the sampling rate of an accelerometer and manages data transmission between sensors and a controller to reduce power consumption. The results of simulation show that the sensitivity and specificity of the proposed fall detection algorithm are both above 96% when applied to a previously collected dataset comprising 20 young actors performing a combination of simulated falls and activities of daily living. This level of performance can be achieved despite a 10.9% reduction in power consumption.
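
    A heavily simplified version of the sensor-fusion rule: a fall produces an impact spike in the acceleration magnitude followed by a barometric pressure rise (lower altitude). Both thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def detect_fall(acc_g, pressure_hpa, acc_thresh=2.5, dp_thresh=0.1):
    """Flag a fall when the triaxial acceleration magnitude (in g)
    spikes above acc_thresh AND the barometric pressure afterwards
    rises by more than dp_thresh hPa relative to the pre-impact mean,
    consistent with a sudden drop in altitude."""
    mag = np.linalg.norm(np.asarray(acc_g, dtype=float), axis=1)
    i = int(np.argmax(mag))
    if mag[i] < acc_thresh:
        return False                     # no impact spike: not a fall
    dp = pressure_hpa[-1] - np.mean(pressure_hpa[:i + 1])
    return bool(dp > dp_thresh)          # altitude drop confirms the fall
```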

  15. Algorithm for detection of the broken phase conductor in the radial networks

    Directory of Open Access Journals (Sweden)

    Ostojić Mladen M.

    2016-01-01

    Full Text Available The paper presents an algorithm for a directional relay used to detect a broken phase conductor in radial networks. The algorithm uses synchronized voltages, measured at the beginning and at the end of the line, as input signals. During the process, the measured voltages are phase-compared. On the basis of the normalized energy, the phase conductor with a break point is detected. A radial network model that simulates a broken phase conductor was developed with the Matlab/Simulink software package. The simulations generated the required input signals with which the algorithm was tested. The development of the algorithm, the formation of the simulation model, and the test results of the proposed algorithm are presented in this paper.
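
    The end-to-end comparison of synchronized measurements can be illustrated with phasors: compare the voltage angle at the two line ends per phase and flag a phase whose difference is large. The phasor formulation and the 60° threshold are assumptions for illustration; the paper itself phase-compares the signals and decides on a normalized energy.

```python
import numpy as np

def broken_phase_flags(v_send, v_recv, angle_thresh_deg=60.0):
    """Compare the angles of synchronized voltage phasors measured at
    the two ends of a line; a phase whose end-to-end angle difference
    exceeds the threshold is declared broken."""
    dphi = np.angle(np.asarray(v_recv) / np.asarray(v_send), deg=True)
    return np.abs(dphi) > angle_thresh_deg
```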

  16. An Anomaly Detection Algorithm of Cloud Platform Based on Self-Organizing Maps

    Directory of Open Access Journals (Sweden)

    Jun Liu

    2016-01-01

    Full Text Available Virtual machines (VMs) on a Cloud platform can be influenced by a variety of factors which can lead to decreased performance and downtime, affecting the reliability of the Cloud platform. Traditional anomaly detection algorithms and strategies for Cloud platforms have some flaws in their detection accuracy, detection speed, and adaptability. In this paper, a dynamic and adaptive anomaly detection algorithm for virtual machines based on Self-Organizing Maps (SOM) is proposed. A unified SOM-based modeling method to characterize machine performance within the detection region is presented, which avoids the cost of modeling each virtual machine individually and enhances the detection speed and reliability for large-scale virtual machines on a Cloud platform. The important parameters that affect the modeling speed are optimized in the SOM process to significantly improve the accuracy of the SOM modeling and therefore the anomaly detection accuracy for the virtual machine.
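
    The SOM core of such a detector can be sketched in miniature: train a small map on "normal" performance vectors, then score new samples by their quantization error (distance to the best-matching unit). The map size, decay schedules and scoring rule are assumptions; the paper's regional unified modeling and parameter optimization are not reproduced.

```python
import numpy as np

def train_som(data, n_units=16, epochs=30, lr=0.5, sigma=3.0, seed=0):
    """Train a tiny 1-D SOM on normal performance vectors with decaying
    learning rate and neighbourhood width."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    w = data[rng.choice(len(data), n_units)].copy()   # init units from samples
    units = np.arange(n_units)
    for e in range(epochs):
        a = lr * (1.0 - e / epochs)                   # decaying learning rate
        s = max(sigma * (1.0 - e / epochs), 0.5)      # decaying neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            h = np.exp(-((units - bmu) ** 2) / (2.0 * s * s))
            w += a * h[:, None] * (x - w)
    return w

def anomaly_score(w, x):
    """Quantization error: distance from x to its best-matching unit."""
    return float(np.linalg.norm(w - np.asarray(x, dtype=float), axis=1).min())
```

    Samples resembling the training data land close to some unit; samples far from all of normal behaviour receive a large score and can be thresholded as anomalies.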

  17. On Real-Time Fault Detection in Wind Turbines: Sensor Selection Algorithm and Detection Time Reduction Analysis

    Directory of Open Access Journals (Sweden)

    Francesc Pozo

    2016-07-01

    Full Text Available In this paper, we address the problem of real-time fault detection in wind turbines. Starting from a data-driven fault detection method, the contribution of this paper is twofold. First, a sensor selection algorithm is proposed with the goal of reducing the computational effort of the fault detection method. Second, an analysis is performed to reduce the data acquisition time needed by the fault detection method, that is, to reduce the fault detection time. The proposed methods are tested on a benchmark wind turbine where different actuator and sensor failures are simulated. The results demonstrate the performance and effectiveness of the proposed algorithms, which dramatically reduce the number of sensors and the fault detection time.

  18. Algorithms

    Indian Academy of Sciences (India)

    Here, i is referred to as the loop index, and 'stat-body' is any sequence of statements: while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1, and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  19. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    Science.gov (United States)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD needed to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. As a result, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given on 10 × 10 pixel tiles of the MODIS observations. From January to June 2006, the results of the current algorithm agree in 64 to 81% of cases with those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land, considered in order to avoid errors due to anthropogenic aerosol, ranges from 60 to 67%. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
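
    The baseline split-window BTD test that the combined algorithm starts from is a single comparison: mineral dust tends to drive BT(11 µm) − BT(12 µm) negative, so pixels with BTD below an offset are flagged. The abstract's point is precisely that a fixed offset is hard to choose, which motivates the added BTR, 30-day composite and cloud tests.

```python
import numpy as np

def btd_dust_mask(bt11, bt12, offset=0.0):
    """Split-window Brightness Temperature Difference test: flag pixels
    where BT(11 um) - BT(12 um) falls below the offset, the classic
    signature of airborne mineral dust."""
    bt11 = np.asarray(bt11, dtype=float)
    bt12 = np.asarray(bt12, dtype=float)
    return (bt11 - bt12) < offset
```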

  20. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    Science.gov (United States)

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-05-17

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  1. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA Wormhole Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Jonny Karlsson

    2013-05-01

    Full Text Available Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole, where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered, so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  2. An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.

    Science.gov (United States)

    Nyström, Marcus; Holmqvist, Kenneth

    2010-02-01

    Event detection is used to classify recorded gaze points into periods of fixation, saccade, smooth pursuit, blink, and noise. Although there is an overall consensus that current algorithms for event detection have serious flaws and that a de facto standard for event detection does not exist, surprisingly little work has been done to remedy this problem. We suggest a new velocity-based algorithm that takes several of the previously known limitations into account. Most importantly, the new algorithm identifies so-called glissades, a wobbling movement at the end of many saccades, as a separate class of eye movements. Part of the solution involves designing an adaptive velocity threshold that makes the event detection less sensitive to variations in noise level and makes the algorithm settings-free for the user. We demonstrate the performance of the new algorithm on eye movements recorded during reading and scene perception and compare it with two of the most commonly used algorithms today. Results show that, unlike the currently used algorithms, fixations, saccades, and glissades are robustly identified by the new algorithm. Using this algorithm, we found that glissades occur in about half of the saccades, during both reading and scene perception, and that they have an average duration close to 24 msec. Due to the high prevalence and long durations of glissades, we argue that researchers must actively choose whether to assign glissades to saccades or fixations; the choice affects dependent variables such as fixation and saccade duration significantly. Current algorithms do not offer this choice, and their assignments of each glissade are largely arbitrary.
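
    The fixed-threshold velocity classification (I-VT) that the article improves on fits in a few lines; the adaptive threshold and the glissade class are exactly what this naive version lacks. The 100 deg/s threshold is an illustrative assumption.

```python
import numpy as np

def classify_ivt(x_deg, y_deg, fs, vel_thresh=100.0):
    """Fixed-threshold velocity classification: samples whose
    point-to-point gaze speed exceeds vel_thresh (deg/s) are labelled
    saccade, the rest fixation."""
    vx = np.gradient(np.asarray(x_deg, dtype=float)) * fs
    vy = np.gradient(np.asarray(y_deg, dtype=float)) * fs
    speed = np.hypot(vx, vy)                     # gaze speed in deg/s
    return np.where(speed > vel_thresh, "saccade", "fixation")
```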

  3. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    Science.gov (United States)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories, change target and change agent detection.

  4. An algorithm for on-line detection of high frequency oscillations related to epilepsy.

    Science.gov (United States)

    López-Cuevas, Armando; Castillo-Toledo, Bernardino; Medina-Ceja, Laura; Ventura-Mejía, Consuelo; Pardo-Peña, Kenia

    2013-06-01

    Recent studies suggest that the appearance of signals with high frequency oscillation components in specific regions of the brain is related to the incidence of epilepsy. These oscillations are in general small in amplitude and short in duration, making them difficult to identify. The analysis of these oscillations is particularly important in epilepsy, and their study could lead to the development of better medical treatments. Therefore, the development of algorithms for detection of these high frequency oscillations is of great importance. In this work, a new algorithm for automatic detection of high frequency oscillations is presented. This algorithm uses approximate entropy and artificial neural networks to extract features in order to detect and classify high frequency components in electrophysiological signals. In contrast to the existing algorithms, the one proposed here is fast and accurate, and can be implemented on-line, thus reducing the time employed to analyze the experimental electrophysiological signals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
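Approximate entropy, one of the features this record relies on, is a standard regularity statistic and can be computed with a short routine. The sketch below is a generic ApEn implementation, not the paper's code; the embedding dimension `m` and tolerance `r` (in the same units as the signal) are illustrative choices. Lower values indicate a more regular, predictable signal.

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D series.

    Compares how often length-m templates stay within tolerance r
    of each other versus length-(m+1) templates.
    """
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for a in templates:
            # Count templates (including self) within Chebyshev distance r.
            matches = sum(
                1 for b in templates
                if max(abs(x - y) for x, y in zip(a, b)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

A constant signal yields ApEn of exactly zero, while an irregular signal yields a clearly larger value, which is what makes the statistic usable as a feature for flagging oscillatory segments.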

  5. An algorithm for image clusters detection and identification based on color for an autonomous mobile robot

    Energy Technology Data Exchange (ETDEWEB)

    Uy, D.L.

    1996-02-01

    An algorithm for detection and identification of image clusters or "blobs" based on color information for an autonomous mobile robot is developed. The input image data are first processed using a crisp color fuzzifier, a binary smoothing filter, and a median filter. The processed image data are then input to the image cluster detection and identification program. The program employs the concept of an "elastic rectangle" that stretches in such a way that the whole blob is finally enclosed in a rectangle. A C program was developed to test the algorithm. The algorithm was tested only on image data of 8x8 size with different numbers of blobs in them. The algorithm works very well in detecting and identifying image clusters.
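The "elastic rectangle" idea can be sketched as follows, assuming a 0/1 pixel grid and a seed pixel inside the blob: the rectangle expands on any side whose adjacent row or column (including diagonal corners) still contains blob pixels, until its whole border is background. This is a minimal reconstruction from the abstract's description, not the original C program; a naive version like this one would also merge two blobs that sit adjacent to each other.

```python
def elastic_rectangle(image, seed):
    """Grow a rectangle from a seed pixel until it encloses the blob.

    `image` is a list of rows of 0/1 values; returns (r0, c0, r1, c1),
    the inclusive bounding rectangle of the blob containing `seed`.
    """
    rows, cols = len(image), len(image[0])
    r0 = r1 = seed[0]
    c0 = c1 = seed[1]
    grown = True
    while grown:
        grown = False
        # Check the row above/below, one column wider to catch diagonals.
        lo, hi = max(c0 - 1, 0), min(c1 + 1, cols - 1)
        if r0 > 0 and any(image[r0 - 1][c] for c in range(lo, hi + 1)):
            r0 -= 1
            grown = True
        if r1 < rows - 1 and any(image[r1 + 1][c] for c in range(lo, hi + 1)):
            r1 += 1
            grown = True
        # Check the column left/right, one row taller for the same reason.
        vlo, vhi = max(r0 - 1, 0), min(r1 + 1, rows - 1)
        if c0 > 0 and any(image[r][c0 - 1] for r in range(vlo, vhi + 1)):
            c0 -= 1
            grown = True
        if c1 < cols - 1 and any(image[r][c1 + 1] for r in range(vlo, vhi + 1)):
            c1 += 1
            grown = True
    return r0, c0, r1, c1
```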

  6. The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating

    CERN Document Server

    Vischia, Pietro

    2016-01-01

    For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desired feature when manipulating the fraction of the unknown process (either for enhancing it or suppressing it) consists in avoiding to modify the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process, and classify each event according to the frequency of it being part of such sub-samples. Comparisons with general MVA algorithms will be shown, as well as a study of the asymptotic properties of the method, making use of a public domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).
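A toy one-dimensional version of the inverse-bagging idea can be sketched as follows. Here "richness" of a bootstrap subsample is judged by how close its mean is to the known background mean, and each event is scored by how often it appears in background-rich subsamples; both of these are hypothetical simplifications (the actual method works with the full modeled PDF, not a single mean), and all parameter names are illustrative.

```python
import random

def inverse_bagging_scores(values, background_mean, n_boot=500,
                           sample_frac=0.3, rich_tol=1.0, rng=None):
    """Toy inverse bagging: count how often each event falls into
    bootstrap subsamples whose mean lies within `rich_tol` of the
    well-modeled (background) mean.  Higher counts suggest the event
    is background-like."""
    rng = rng or random.Random(0)
    n = len(values)
    k = max(1, int(sample_frac * n))
    counts = [0] * n
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(k)]     # bootstrap draw
        mean = sum(values[i] for i in idx) / k
        if abs(mean - background_mean) < rich_tol:     # background-rich
            for i in set(idx):
                counts[i] += 1
    return counts
```

With a clearly separated unknown process, events from the well modeled process accumulate noticeably higher membership counts.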

  7. Polarization Lidar Liquid Cloud Detection Algorithm for Winter Mountain Storms

    Science.gov (United States)

    Sassen, Kenneth; Zhao, Hongjie

    1992-01-01

    We have collected an extensive polarization lidar dataset from elevated sites in the Tushar Mountains of Utah in support of winter storm cloud seeding research and experiments. Our truck-mounted ruby lidar collected zenith, dual-polarization lidar data through a roof window equipped with a wiper system to prevent snowfall accumulation. Lidar returns were collected at a rate of one shot every 1 to 5 min during declared storm periods over the mid-January to mid-March field seasons of 1985 and 1987. The mid-barrier remote sensor field site was located at 2.57 km MSL. Of chief interest to weather modification efforts are the heights of supercooled liquid water (SLW) clouds, which must be known to assess their 'seedability' (i.e., temperature and height suitability for artificially increasing snowfall). We are currently re-examining our entire dataset to determine the climatological properties of SLW clouds in winter storms using an autonomous computer algorithm.

  8. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    Directory of Open Access Journals (Sweden)

    Jianqiang Wang

    2013-12-01

    Full Text Available The detection of preceding vehicles in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm via the image processing technique. First, the brightness of the taillights during nighttime is used as the typical feature, and we use the existing global detection algorithm to detect and pair the taillights. When the vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. Then, the vehicle is only detected in the PR. This could reduce the detection time and avoid false pairing between bright spots in the PR and bright spots outside the PR. Additionally, we present a threshold updating method to make the thresholds adaptive. Finally, experimental studies are provided to demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.
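The prediction of a possible region (PR) can be illustrated with a minimal constant-velocity stand-in: given the taillight-pair center in two consecutive frames, extrapolate the next center and search only inside a margin around it. The record describes a time series analysis model, so this linear prediction and the margin value are simplifying assumptions for illustration only.

```python
def predict_region(prev_center, curr_center, margin=20):
    """Constant-velocity prediction of the next possible region (PR).

    Given taillight-pair centers (x, y) in two consecutive frames,
    predict the next center and return a +/- margin bounding box.
    """
    vx = curr_center[0] - prev_center[0]
    vy = curr_center[1] - prev_center[1]
    cx, cy = curr_center[0] + vx, curr_center[1] + vy
    return (cx - margin, cy - margin, cx + margin, cy + margin)

def in_region(point, region):
    """True if the detected bright spot lies inside the PR."""
    x0, y0, x1, y1 = region
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1
```

Restricting taillight pairing to spots for which `in_region` is true is what cuts both the search time and the chance of pairing a spot inside the PR with one outside it.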

  9. Fanconi anemia patients are more susceptible to infection with tumor virus SV40.

    Directory of Open Access Journals (Sweden)

    Manola Comar

    Full Text Available Fanconi anemia (FA) is a recessive DNA repair disease characterized by a high predisposition to developing neoplasms. The DNA tumor polyomavirus simian virus 40 (SV40) transforms FA fibroblasts at high efficiency, suggesting that FA patients could be highly susceptible to SV40 infection. To test this hypothesis, the large tumor (LT) antigen of the SV40, BKV, JCV and Merkel Cell (MC) polyomaviruses was tested in blood samples from 89 FA patients and from 82 of their parents. Two control groups consisting of 47 non-FA patients affected by other genetic bone marrow failure diseases and 91 healthy subjects were also evaluated. Although JCV, BKV and MC were not found in any of the FA samples, the prevalence and viral load of SV40 were higher in FA patients (25%; mean viral load: 1.1×10^2 copies/10^5 cells) as compared with healthy individuals (4.3%; mean viral load: 0.8×10^1 copies/10^5 cells) and genetic controls (0%) (p<0.005). A marked age-dependent frequency of SV40 was found in FA with respect to healthy subjects, suggesting that, although acquired early in life, the virus can spread more easily in specific groups of the population. From the analysis of family pedigrees, 60% of the parents of SV40-positive probands were positive for the virus, compared to 2% of the parents of the SV40-negative probands (p<0.005). It is worthy of note that the relative frequency of SV40-positive relatives detected in this study was the highest ever reported, showing that asymptomatic FA carriers are also more susceptible to SV40. In conclusion, we favor the hypothesis that SV40 spread could be facilitated by individuals who are genetically more susceptible to infection, such as FA patients. The increased susceptibility to SV40 infection seems to be associated with a specific defect of the immune system which supports a potential interplay of SV40 with an underlying genetic alteration that increases the risk of malignancies.

  10. Bio-Inspired Distributed Decision Algorithms for Anomaly Detection

    Science.gov (United States)

    2017-03-01

    Hardware implementations tested DIAMoND's performance at detecting a variety of threats (focused primarily on DDoS and Stealth Scan attacks), over a ... DNS-DDoS Attack Mitigation (DRS-ADAM). We conducted our experiments in both a software simulation environment and on a dedicated hardware testbed.

  11. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the data is reduced, which may improve classification accuracy. This paper proposes a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  12. An unsupervised learning algorithm for fatigue crack detection in waveguides

    Science.gov (United States)

    Rizzo, Piervincenzo; Cammarata, Marcello; Dutta, Debaditya; Sohn, Hoon; Harries, Kent

    2009-02-01

    Ultrasonic guided waves (UGWs) are a useful tool in structural health monitoring (SHM) applications that can benefit from built-in transduction, moderately large inspection ranges, and high sensitivity to small flaws. This paper describes an SHM method based on UGWs and outlier analysis devoted to the detection and quantification of fatigue cracks in structural waveguides. The method combines the advantages of UGWs with the outcomes of the discrete wavelet transform (DWT) to extract defect-sensitive features aimed at performing a multivariate diagnosis of damage. In particular, the DWT is exploited to generate a set of relevant wavelet coefficients to construct a uni-dimensional or multi-dimensional damage index vector. The vector is fed to an outlier analysis to detect anomalous structural states. The general framework presented in this paper is applied to the detection of fatigue cracks in a steel beam. The probing hardware consists of a National Instruments PXI platform that controls the generation and detection of the ultrasonic signals by means of piezoelectric transducers made of lead zirconate titanate. The effectiveness of the proposed approach to diagnose the presence of defects as small as a few per cent of the waveguide cross-sectional area is demonstrated.
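The DWT-plus-outlier-analysis pipeline above can be sketched in a few lines: compute a wavelet detail-band energy as the damage-sensitive feature, then flag a measurement whose energy deviates from the baseline (undamaged) population by more than a few standard deviations. This is a hedged, one-level Haar simplification of the approach described in the record, which uses the full DWT coefficient set and a multivariate outlier analysis; the threshold `k` is an assumed value.

```python
import math
import statistics

def haar_detail(signal):
    """One-level Haar DWT detail coefficients (scaled pair differences)."""
    return [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

def detail_energy(signal):
    """Damage-sensitive feature: energy of the detail band."""
    return sum(d * d for d in haar_detail(signal))

def is_outlier(signal, baseline_energies, k=3.0):
    """Flag a measurement whose detail-band energy deviates from the
    baseline population mean by more than k standard deviations."""
    mu = statistics.mean(baseline_energies)
    sigma = statistics.pstdev(baseline_energies)
    return abs(detail_energy(signal) - mu) > k * sigma
```

A crack that scatters high-frequency wave content raises the detail-band energy well outside the baseline spread, so the univariate damage index already separates damaged from undamaged states in this toy setting.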

  13. Detecting structural breaks in time series via genetic algorithms

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid

    2016-01-01

    Detecting structural breaks is an essential task for the statistical analysis of time series, for example, for fitting parametric models to it. In short, structural breaks are points in time at which the behaviour of the time series substantially changes. Typically, no solid background knowledge ...
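A toy genetic algorithm for locating a single structural break can be sketched as follows: candidate break positions form the population, fitness is the sum of squared errors of a piecewise-constant fit, and tournament selection plus integer mutation evolve the population. The abstract does not specify the encoding or operators, so everything here (elitism, the mutation range, the final deterministic hill-climb refinement) is an illustrative choice.

```python
import random

def sse(series, b):
    """Sum of squared errors of a piecewise-constant fit broken at b."""
    def seg(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    return seg(series[:b]) + seg(series[b:])

def ga_breakpoint(series, pop_size=20, generations=60, rng=None):
    """Toy genetic algorithm searching for one structural break."""
    rng = rng or random.Random(1)
    n = len(series)
    pop = [rng.randrange(2, n - 2) for _ in range(pop_size)]
    best = min(pop, key=lambda b: sse(series, b))
    for _ in range(generations):
        nxt = [best]                                   # elitism
        while len(nxt) < pop_size:
            a, b = rng.choice(pop), rng.choice(pop)    # tournament of two
            parent = a if sse(series, a) < sse(series, b) else b
            child = min(max(parent + rng.randint(-3, 3), 2), n - 2)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda b: sse(series, b))
    # Deterministic local refinement of the GA's answer.
    improved = True
    while improved:
        improved = False
        for cand in (best - 1, best + 1):
            if 2 <= cand <= n - 2 and sse(series, cand) < sse(series, best):
                best = cand
                improved = True
    return best
```

For multiple breaks, the chromosome would instead be a sorted tuple of positions with crossover exchanging sublists, which is where a GA pays off over an exhaustive scan.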

  14. Data mining algorithms for land cover change detection: a review

    Indian Academy of Sciences (India)

    Land cover change detection has been a topic of active research in the remote sensing community. Due to the enormous amount of data available from satellites, it has attracted the attention of data mining researchers seeking new directions for solutions. The Terra Moderate Resolution Imaging Spectroradiometer (MODIS) ...

  15. An evaluation of classification algorithms for intrusion detection ...

    African Journals Online (AJOL)

    An intrusion detection system is one of the main technologies used to monitor network traffic and identify network intrusions. Most of the available IDSs use all 41 features in the network to evaluate and search for intrusive patterns, some of which are redundant and irrelevant, and they also generate a ...

  16. A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery.

    Science.gov (United States)

    Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng

    2009-01-01

    The HJ-1B satellite, launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. In this paper, HJ-1B imagery containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates was simulated, including the RED, NIR, MIR and TIR channels. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using the simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is above 800 K, the algorithm has a high probability of detection. If the simulated fire area is smaller than 10 m², the fire may be detected only when the simulated fire temperature is larger than 900 K. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.
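The core of a contextual fire test can be sketched as follows: a pixel is flagged when its brightness temperature exceeds either an absolute threshold or its local background statistics by a margin. This is a heavily simplified, single-band stand-in for the MODIS-style multi-test algorithm the record builds on; the window size, thresholds, and the `k`/`delta` margins are illustrative values, and real algorithms also use the T4-T11 difference and reject cloud/sun-glint pixels from the background.

```python
import statistics

def contextual_fire_mask(t4, k=3.0, delta=4.0, abs_thresh=360.0):
    """Flag pixels whose brightness temperature (K) exceeds an absolute
    threshold, or the surrounding 3x3 background by max(k*std, delta)."""
    rows, cols = len(t4), len(t4[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Background window: 3x3 neighborhood excluding the pixel itself.
            bg = [t4[i][j]
                  for i in range(max(r - 1, 0), min(r + 2, rows))
                  for j in range(max(c - 1, 0), min(c + 2, cols))
                  if (i, j) != (r, c)]
            mu = statistics.mean(bg)
            sd = statistics.pstdev(bg)
            if t4[r][c] > abs_thresh or t4[r][c] > mu + max(k * sd, delta):
                mask[r][c] = True
    return mask
```

The contextual comparison is what makes the test adaptive: a moderately warm pixel over a cool uniform background is flagged, while the same temperature embedded in an already-hot scene is not.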

  17. 13C and 15N CP/MAS, 1H-15N SCT CP/MAS and FTIR spectroscopy as tools for qualitative detection of the presence of zwitterionic and non-ionic forms of ansa-macrolide 3-formylrifamycin SV and its derivatives in solid state.

    Science.gov (United States)

    Przybylski, Piotr; Pyta, Krystian; Klich, Katarzyna; Schilf, Wojciech; Kamieński, Bohdan

    2014-01-01

    (13)C and (15)N CP/MAS, including (1)H-(13)C and (1)H-(15)N short contact time CP/MAS experiments, and FTIR methods were applied for detailed structural characterization of the ansa-macrolides 3-formylrifamycin SV (1) and its derivatives (2-6) in crystal and in powder forms. Although the HPLC chromatograms for 2/CH3OH and 2/CH3CCl3 were the same for rifampicin crystals dissolved in the respective solvents, the UV-vis data recorded for them differed in the 300-375 nm region. Detailed solid state (13)C and (15)N CP/MAS NMR and FTIR studies revealed that rifampicin (2), in contrast to 3-formylrifamycin SV (1) and its amino derivatives (3-6), can occur in pure non-ionic or zwitterionic forms in crystal, and in these pure forms or a mixture of them in powder. Multinuclear CP/MAS and FTIR studies also demonstrated that the 3-6 derivatives were present exclusively in pure zwitterionic forms, both in powder and in crystal. On the basis of the solid state NMR and FTIR studies, two conformers of 3-formylrifamycin SV were detected in powder form due to the different orientations of the carbonyl group of the amide moiety. PM6 molecular modeling at the semi-empirical level of theory allowed visualization of the most energetically favorable non-ionic and zwitterionic forms of antibiotics 1-6, strongly stabilized via intramolecular H-bonds. FTIR studies indicated that the originally adopted forms of these types of antibiotics in crystal or in powder remain stable over time under standard laboratory conditions. The results presented point to the fact that, because of the possible presence of two forms of rifampicin (compound 2), quantification of the content of this antibiotic in relevant pharmaceuticals requires caution. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection

    Science.gov (United States)

    W. Wang; J.J. Qu; X. Hao; Y. Liu

    2009-01-01

    In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be systematically...

  19. An automated sawtooth detection algorithm for strongly varying plasma conditions and crash characteristics

    Science.gov (United States)

    Gude, A.; Maraschek, M.; Kardaun, O.; the ASDEX Upgrade Team

    2017-09-01

    A sawtooth crash algorithm that can automatically detect irregular sawteeth with strongly varying crash characteristics, including inverted crashes with central signal increase, has been developed. Such sawtooth behaviour is observed in ASDEX Upgrade with its tungsten wall, especially in phases with central ECRH. This application of ECRH for preventing impurity accumulation is envisaged also for ITER. The detection consists of three steps: a sensitive edge detection, a multichannel combination to increase detection performance, and a profile analysis that tests generic sawtooth crash features. The effect of detection parameters on the edge detection results has been investigated using synthetic signals and tested in an application to ASDEX Upgrade soft x-ray data.
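The first step of the pipeline, a sensitive crash-edge detection that also catches inverted crashes, can be illustrated with a robust difference test: an index is flagged when its one-step change deviates from the typical change by more than a multiple of the median absolute deviation, so the threshold adapts to the noise level. This is a one-channel toy version; the record's algorithm additionally combines many channels and checks generic sawtooth profile features, and the multiplier `k` is an assumed value.

```python
import statistics

def detect_crashes(signal, k=6.0):
    """Flag sample indices where the one-step difference is anomalously
    large in either direction (normal or inverted crash).

    Uses a median/MAD threshold so the detector adapts to the baseline
    noise level of the signal.
    """
    diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    med = statistics.median(diffs)
    mad = statistics.median(abs(d - med) for d in diffs)
    thresh = k * max(mad, 1e-12)       # guard against a zero MAD
    return [i + 1 for i, d in enumerate(diffs) if abs(d - med) > thresh]
```

On a slowly ramping signal with periodic sharp drops, only the drop samples exceed the adaptive threshold; an inverted crash (sharp central rise) is caught the same way because the test is two-sided.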

  20. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    OpenAIRE

    Dong-Han Lee; Jong-Hyo Ahn; Bong-Hwan Koh

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IM...

  1. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Young-Jae Song

    2009-07-01

    Full Text Available Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  2. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    Science.gov (United States)

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  3. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Cairn detection in southern Arabia using a supervised automatic detection algorithm and multiple sample data spectroscopic clustering

    Science.gov (United States)

    Schuetter, Jared Michael

    Excavating cairns in southern Arabia is a way for anthropologists to understand which factors led ancient settlers to transition from a pastoral lifestyle and tribal narrative to the formation of states that exist today. Locating these monuments has traditionally been done in the field, relying on eyewitness reports and costly searches through the arid landscape. In this thesis, an algorithm for automatically detecting cairns in satellite imagery is presented. The algorithm uses a set of filters in a window based approach to eliminate background pixels and other objects that do not look like cairns. The resulting set of detected objects constitutes fewer than 0.001% of the pixels in the satellite image, and contains the objects that look the most like cairns in imagery. When a training set of cairns is available, a further reduction of this set of objects can take place, along with a likelihood-based ranking system. To aid in cairn detection, the satellite image is also clustered to determine land-form classes that tend to be consistent with the presence of cairns. Due to the large number of pixels in the image, a subsample spectral clustering algorithm called "Multiple Sample Data Spectroscopic clustering" is used. This multiple sample clustering procedure is motivated by perturbation studies on single sample spectral algorithms. The studies, presented in this thesis, show that sampling variability in the single sample approach can cause an unsatisfactory level of instability in clustering results. The multiple sample data spectroscopic clustering algorithm is intended to stabilize this perturbation by combining information from different samples. While sampling variability is still present, the use of multiple samples mitigates its effect on cluster results. Finally, a step-through of the cairn detection algorithm and satellite image clustering are given for an image in the Hadramawt region of Yemen. 
The top ranked detected objects are presented, and a discussion

  5. Deep learning algorithms for detecting explosive hazards in ground penetrating radar data

    Science.gov (United States)

    Besaw, Lance E.; Stimac, Philip J.

    2014-05-01

    Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.

  6. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Bohui Zhu

    2013-01-01

    Full Text Available This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three types of performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias, namely sensitivity, specificity, and accuracy. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm reflects better performance not only in clustering results but also in terms of global search ability and convergence ability, which proves its effectiveness for the detection of ECG arrhythmias.

  7. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    Directory of Open Access Journals (Sweden)

    Sanghyuk Lee

    2014-01-01

    Full Text Available A fault detection algorithm was developed using a similarity measure and the random forest algorithm, and applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were carried out by calculating a weighted similarity measure. Twelve available coefficients among the healthy and faulty status data groups were used to make the decision. Similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priority. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Useful data amounts were obtained through repeated trials of similarity calculation.

  8. [Two Data Inversion Algorithms of Aerosol Horizontal Distributiol Detected by MPL and Error Analysis].

    Science.gov (United States)

    Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai

    2015-07-01

    Atmospheric aerosols have important impacts on human health, the environment, and the climate system. The Micro Pulse Lidar (MPL) is a new effective tool for detecting the horizontal distribution of atmospheric aerosol, and extinction coefficient inversion and error analysis are important aspects of its data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data, and the results were compared. The error analysis showed that the errors of the slope and Fernald algorithms come mainly from the theoretical model and from certain assumptions, respectively. Though some problems still exist in these two horizontal extinction coefficient inversions, both can present the spatial and temporal distribution of aerosol particles accurately, and the correlations with a forward-scattering visibility sensor are both high, with a value of 95%. Furthermore, relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient.
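The slope method mentioned above can be sketched in a few lines. For a horizontally homogeneous atmosphere, the range-corrected signal satisfies ln(P·r²) = const − 2αr, so the extinction coefficient α is minus half the slope of a straight-line fit of ln(P·r²) against range. This is the textbook form of the method, which is exactly why its error comes mainly from the homogeneity assumption of the theoretical model.

```python
import math

def slope_method_extinction(ranges, power):
    """Estimate a homogeneous extinction coefficient from lidar returns.

    Fits a least-squares line to ln(P * r^2) vs r and returns
    alpha = -slope / 2 (units: 1/length-unit of `ranges`).
    """
    xs = ranges
    ys = [math.log(p * r * r) for p, r in zip(power, ranges)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope / 2.0
```

On a synthetic return generated with a known α the estimator recovers it exactly; on real horizontal data, any inhomogeneity along the path shows up directly as bias in the fitted slope.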

  9. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    Directory of Open Access Journals (Sweden)

    Manuel Prado-Velasco

    2013-10-01

    Full Text Available Boosted by the health consequences and cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers, unobtrusiveness and reliability, defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that addresses the adaptive capability, which is also presented. The work demonstrates the robustness and reliability of our new algorithm, which will be the basis of a smart fall monitor, underlining the relevance of the results.
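The energy-based trigger idea can be illustrated with a minimal sketch: compute the short-time energy of the acceleration signal and flag windows whose energy exceeds an adaptive multiple of the typical (median) window energy. This is a drastically simplified, isotropic stand-in for the record's adaptive anisotropic algorithm; the window length and multiplier `k` are assumed values for illustration.

```python
import statistics

def short_time_energy(accel, window=5):
    """Sliding-window energy of a 1-D acceleration-magnitude signal."""
    return [sum(a * a for a in accel[i:i + window])
            for i in range(0, len(accel) - window + 1)]

def detect_impacts(accel, window=5, k=10.0):
    """Return window start indices whose short-time energy exceeds k
    times the median window energy (adaptive to the baseline level)."""
    energies = short_time_energy(accel, window)
    med = statistics.median(energies)
    return [i for i, e in enumerate(energies) if e > k * med]
```

Because the threshold is a multiple of the median energy rather than a fixed constant, the same trigger works across subjects and sensor placements with different baseline activity levels.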

  10. Robust Mean Change-Point Detecting through Laplace Linear Regression Using EM Algorithm

    Directory of Open Access Journals (Sweden)

    Fengkai Yang

    2014-01-01

    normal distribution, we developed an expectation maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the error distribution and effective in estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.
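The abstract does not give the EM updates, but the underlying idea can be illustrated directly: under a Laplace error model the maximum-likelihood segment mean is the median, so the change-point minimizes the total absolute deviation within the two segments. A brute-force scan over split points, as a hedged stand-in for the paper's EM estimator:

```python
from statistics import median

def laplace_changepoint(xs):
    """Return the index k splitting xs into xs[:k], xs[k:] that
    minimizes total absolute deviation from each segment's median --
    the Laplace maximum-likelihood mean change-point. A brute-force
    illustration, not the paper's EM algorithm."""
    def cost(seg):
        m = median(seg)
        return sum(abs(x - m) for x in seg)
    best_k, best_c = None, float("inf")
    for k in range(2, len(xs) - 1):
        c = cost(xs[:k]) + cost(xs[k:])
        if c < best_c:
            best_k, best_c = k, c
    return best_k

data = [0.1, -0.2, 0.0, 0.2, -0.1] * 4 + [5.1, 4.8, 5.0, 5.2, 4.9] * 4
print(laplace_changepoint(data))   # 20
```

Because the median (not the mean) drives the cost, a single gross outlier barely moves the estimate, which is the robustness property the abstract claims.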

  11. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    Science.gov (United States)

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.

  12. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    Directory of Open Access Journals (Sweden)

    Markus Goldstein

    Full Text Available Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.

  13. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    Science.gov (United States)

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  14. Contour detection and completion for inpainting and segmentation based on topological gradient and fast marching algorithms.

    Science.gov (United States)

    Auroux, Didier; Cohen, Laurent D; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation.

  15. Dynamic VaR Measurement of Gold Market with SV-T-MN Model

    Directory of Open Access Journals (Sweden)

    Fenglan Li

    2017-01-01

    Full Text Available VaR (Value at Risk) in the gold market was measured and predicted by combining a stochastic volatility (SV) model with extreme value theory. Firstly, to capture the fat tails and volatility persistence of gold market return series, gold price return volatility was modeled with a state-space SV-T-MN (SV-T with Mixture-of-Normals distribution) model. Secondly, out-of-sample volatility prediction was realized using an approximate filtering algorithm. Finally, extreme value theory based on the generalized Pareto distribution was applied to measure the dynamic Value at Risk of gold market returns. An empirical analysis of gold prices with the proposed model shows that the combined model can measure and predict the VaR of the gold market reasonably and effectively, enabling investors to better understand the extreme risk of the gold market and to adopt active coping strategies.
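The SV-T-MN and EVT machinery cannot be reconstructed from the abstract, but the notion of a dynamic VaR series can be illustrated with the simplest baseline such models improve on: historical simulation, i.e., the empirical loss quantile over a rolling window. All numbers here are toy values.

```python
def historical_var(returns, window=250, alpha=0.95):
    """One-step-ahead Value at Risk by historical simulation: at each
    time t, VaR is the empirical alpha-quantile of losses over the
    previous `window` returns (no interpolation). A simple baseline,
    not the paper's SV-T-MN / extreme-value model."""
    out = []
    for t in range(window, len(returns)):
        losses = sorted(-r for r in returns[t - window:t])
        idx = int(alpha * window) - 1      # index of the alpha-quantile
        out.append(losses[idx])
    return out

rets = [0.01, -0.02, 0.005, -0.01, 0.02] * 60    # toy return series
var_series = historical_var(rets, window=250)
print(var_series[0])   # 0.02
```

Historical simulation reacts slowly to volatility clustering, which is precisely why the paper layers an SV model and a generalized-Pareto tail on top of it.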

  16. GA-DoSLD: Genetic Algorithm Based Denial-of-Sleep Attack Detection in WSN

    Directory of Open Access Journals (Sweden)

    Mahalakshmi Gunasekaran

    2017-01-01

    Full Text Available Denial-of-sleep (DoSL) attack is a special category of denial-of-service attack that prevents battery-powered sensor nodes from going into sleep mode, thus affecting network performance. Existing schemes for DoSL attack detection do not provide optimal energy conservation and key pairing operation. Hence, in this paper, an efficient Genetic Algorithm (GA)-based denial-of-sleep attack detection (GA-DoSLD) algorithm is suggested for analyzing the misbehaviors of the nodes. The suggested algorithm implements a Modified-RSA (MRSA) algorithm in the base station (BS) for generating and distributing the key pair among the sensor nodes. Before sending/receiving packets, the sensor nodes determine the optimal route using the Ad Hoc On-Demand Distance Vector (AODV) routing protocol and then ensure the trustworthiness of the relay node using the fitness calculation. The crossover and mutation operations detect and analyze the methods that attackers use to implement the attack. On determining an attacker node, the BS broadcasts the blocking information to all other sensor nodes in the network. Simulation results prove that the suggested algorithm is optimal compared to existing schemes such as X-MAC, ZKP, and TE2P.

  17. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space- and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use and land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
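The control-chart idea is easy to sketch with a much cruder predictor than a Gaussian process: predict each observation by its per-phase seasonal mean from a training window, then flag the first residual beyond k standard deviations. A hedged stand-in with illustrative parameters, not the paper's GP model:

```python
def online_change_detect(series, period, train_cycles=3, k=4.0):
    """Monitor a periodic series: predict each point by the per-phase
    mean over an initial training window, then flag the first time the
    residual exceeds k training standard deviations (control-chart
    style). A seasonal-mean stand-in for a Gaussian-process predictor."""
    n_train = period * train_cycles
    phase_mean = [0.0] * period
    for p in range(period):
        vals = [series[i] for i in range(p, n_train, period)]
        phase_mean[p] = sum(vals) / len(vals)
    resid = [series[i] - phase_mean[i % period] for i in range(n_train)]
    sd = max((sum(r * r for r in resid) / len(resid)) ** 0.5, 1e-6)
    for t in range(n_train, len(series)):
        if abs(series[t] - phase_mean[t % period]) > k * sd:
            return t            # first detected change
    return None

season = [0.0, 1.0, 2.0, 1.0]                          # one seasonal cycle
series = season * 6 + [v + 3.0 for v in season] * 2    # shift after t=24
print(online_change_detect(series, period=4))          # 24
```

The GP replaces the per-phase means with a full predictive distribution, so the control limits adapt to local uncertainty instead of a single global sd.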

  18. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    Science.gov (United States)

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-11-04

    Recently, high-order statistics have received increasing interest in the field of hyperspectral anomaly detection. However, most existing high-order-statistics-based anomaly detection methods require stepwise iterations, since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method called COSD (coskewness detector). COSD needs no iteration and produces a single detection map. Experiments on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm.

  19. [An improved morphological edge detection algorithm of medical image based on multi-structure element].

    Science.gov (United States)

    Luo, Xiaogang; Liu, Ting; Peng, Chenglin; Wen, Li

    2009-02-01

    An improved edge detection algorithm is proposed in this paper for medical images with strong noise and fuzzy edges. The algorithm modifies the combination of morphological operations so that unclear image edges are avoided. The paper also introduces a multi-structure-element algorithm that can preserve complete edges from different directions of the image. Furthermore, contrast enhancement and morphological filtering are applied. This method detects edges efficiently, keeps the detected edges smooth, and obtains coherent image edges. Experiments demonstrate that this edge detector reduces noise better and keeps edges more accurate than traditional edge detectors, which enhances its practicality.
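The multi-structure-element idea, edge strength as the maximum morphological gradient (dilation minus erosion) over structuring elements oriented in different directions, can be sketched in pure Python. This omits the paper's contrast enhancement and filtering steps, and the four 3-pixel line elements are illustrative choices:

```python
def morph_gradient_multi_se(img):
    """Edge strength as the max over several structuring elements of
    (dilation - erosion). Four 3-pixel line SEs (horizontal, vertical,
    two diagonals) pick up edges in different directions -- a minimal
    sketch of the multi-structure-element idea."""
    h, w = len(img), len(img[0])
    ses = [((0, -1), (0, 0), (0, 1)),     # horizontal
           ((-1, 0), (0, 0), (1, 0)),     # vertical
           ((-1, -1), (0, 0), (1, 1)),    # diagonal
           ((-1, 1), (0, 0), (1, -1))]    # anti-diagonal
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = 0
            for se in ses:
                vals = [img[y + dy][x + dx] for dy, dx in se
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                # dilation - erosion within this SE's neighborhood
                best = max(best, max(vals) - min(vals))
            out[y][x] = best
    return out

# 6x6 image: bright 2x2 square on a dark background
img = [[255 if 2 <= y <= 3 and 2 <= x <= 3 else 0 for x in range(6)]
       for y in range(6)]
edges = morph_gradient_multi_se(img)
print(edges[2][2], edges[0][0])   # 255 0 (boundary pixel vs flat region)
```

Taking the maximum over directional elements is what lets edges of any orientation survive, where a single structuring element would miss those parallel to it.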

  20. Track-Before-Detect Algorithm for Weak Extended Target Based on Particle Filter under Clutter Environment

    Directory of Open Access Journals (Sweden)

    Wu Sunyong

    2017-06-01

    Full Text Available A Track-Before-Detect (TBD) algorithm based on the particle filter is proposed for weak extended target detection and tracking at low signal-to-clutter ratio. A rod-shaped object is analyzed by dividing cells in range and azimuth under Weibull clutter. Building on the point-target case, the likelihood function and particle weights are obtained from the target spread function. In the TBD algorithm, a binary target variable and the target shape parameters are added to the state vector, and the scattering points in the sample collection are given based on the particle filter, which can detect the target and estimate its state and shape parameters under the clutter environment. Simulation results show that the stability of the algorithm is very good.

  1. Label propagation algorithm for community detection based on node importance and label influence

    Science.gov (United States)

    Zhang, Xian-Kun; Ren, Jing; Song, Chen; Jia, Jia; Zhang, Qian

    2017-09-01

    Recently, the detection of high-quality communities has become a hot spot in social network research. The label propagation algorithm (LPA) has attracted wide attention because of its linear time complexity and because it requires neither an objective function nor the number of communities to be defined in advance. However, LPA suffers from uncertainty and randomness in the label propagation process, which affects the accuracy and stability of the resulting communities. For large-scale social networks, this paper proposes a novel label propagation algorithm for community detection based on node importance and label influence (LPA_NI). Experiments with comparative algorithms on real-world and synthetic networks show that LPA_NI can significantly improve the quality of community detection and shorten the iteration period. It also has better accuracy and stability at similar complexity.
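Plain LPA, without the paper's node-importance and label-influence weighting, can be sketched as follows. To keep the example deterministic, ties are broken toward the largest label; standard LPA breaks them randomly, which is exactly the instability LPA_NI targets.

```python
def label_propagation(adj, sweeps=5):
    """Plain label propagation: each node repeatedly adopts the most
    frequent label among its neighbors. Ties go to the largest label to
    keep the sketch deterministic (standard LPA ties randomly); the
    paper's importance/influence weighting is omitted."""
    labels = {v: v for v in adj}
    for _ in range(sweeps):
        changed = False
        for v in sorted(adj):                      # asynchronous sweep
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=lambda l: (counts[l], l))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:                            # converged
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3 -- 4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)
print(labels)   # two communities: {0,1,2,3} and {4,5,6,7}
```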

  2. Evolutionary algorithm for automatic detection of blood vessel shapes

    Science.gov (United States)

    Kutics, Andrea

    1996-04-01

    Automatic detection of the shapes of blood vessels located in the skin has great diagnostic importance. In this work, an evolutionary approach operating on morphological operator and operation structures is proposed for determining the shape and network of blood vessels located in the upper skin layers. A population of individuals comprising morphological structures is generated. A two-dimensional queue-like data representation of individuals is applied in order to provide an appropriate representation of the connectivity constraints that originate in the two-dimensional nature of the structuring elements. Two-dimensional crossover- and mutation-type manipulation operations are carried out on selected elements of the population. Unlike the usual techniques, our approach imposes no constraints on background and smoothness, as no matched filter or linear operator is applied. Owing to the evolutionary method, no a priori knowledge of the vessel shape is necessary. Also unlike the usual imaging techniques, which mainly use angiograms as input, this work applies infrared-filtered images taken by a CCD camera to investigate the blood vessels of broad skin areas. The method is implemented in parallel on a lattice network of transputers, resulting in significantly decreased processing time compared to the usual techniques.

  3. Algorithms for real-time fault detection of the Space Shuttle Main Engine

    Science.gov (United States)

    Ruiz, C. A.; Hawman, M. W.; Galinaitis, W. S.

    1992-01-01

    This paper reports on the results of a program to develop and demonstrate concepts related to a real-time health management system (HMS) for the Space Shuttle Main Engine (SSME). An HMS framework was developed on the basis of a top-down analysis of the current rocket engine failure modes and the engine monitoring requirements. One result of Phase I of this program was the identification of algorithmic approaches for detecting failures of the SSME. Three different analytical techniques were developed which demonstrated the capability to detect failures significantly earlier than the existing redlines. Based on promising initial results, Phase II of the program was initiated to further validate and refine the fault detection strategy on a large data base of 140 SSME test firings, and to implement the resultant algorithms in real time. The paper begins with an overview of the refined algorithms used to detect failures during SSME start-up and main-stage operation. Results of testing these algorithms on a data base of nominal and off-nominal SSME test firings are discussed. The paper concludes with a discussion of the performance of the algorithms operating on a real-time computer system.

  4. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  5. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m x n and n x p respectively. Then, multiplication of A by B, denoted A x B, is defined by matrix C of dimension m x p, where.
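The definition above translates directly into the classic triple loop over two-dimensional arrays:

```python
def mat_mul(a, b):
    """Multiply an m x n matrix by an n x p matrix, giving the m x p
    matrix C with C[i][j] = sum over k of A[i][k] * B[k][j]."""
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "inner dimensions must match"
    c = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```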

  6. Integration of a Self-Coherence Algorithm into DISAT for Forced Oscillation Detection

    Energy Technology Data Exchange (ETDEWEB)

    Follum, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-03

    With the increasing number of phasor measurement units on the power system, behaviors typically not observable on the power system are becoming more apparent. Oscillatory behavior on the power system, notably forced oscillations, is one such behavior. However, the large amounts of data coming from the PMUs make manually detecting and locating these oscillations difficult. To automate portions of the process, an oscillation detection routine was coded into the Data Integrity and Situational Awareness Tool (DISAT) framework. Integration into the DISAT framework allows forced oscillations to be detected and information about the event provided to operational engineers. The oscillation detection algorithm integrates with the data handling and atypical data detecting capabilities of DISAT, building on a standard library of functions. This report details that integration, with information on the algorithm, some implementation issues, and some sample results from the western United States’ power grid.

  7. Virtual-Lattice Based Intrusion Detection Algorithm over Actuator-Assisted Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jing Yan

    2017-05-01

    Full Text Available Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when confidentiality is of prime importance. However, resource constraints of UWSNs, such as sparse deployment and limited energy, make intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under the VLM and OCLMP algorithms, the detection probabilities are calculated, while topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method can improve detection accuracy and save energy consumption compared with conventional methods.

  8. Virtual-Lattice Based Intrusion Detection Algorithm over Actuator-Assisted Underwater Wireless Sensor Networks.

    Science.gov (United States)

    Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping

    2017-05-20

    Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when confidentiality is of prime importance. However, resource constraints of UWSNs, such as sparse deployment and limited energy, make intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under the VLM and OCLMP algorithms, the detection probabilities are calculated, while topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method can improve detection accuracy and save energy consumption compared with conventional methods.

  9. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm from pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with the latest techniques and yields better results in terms of the F1 score.
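The GA-tuned fuzzy classifier cannot be reproduced from the abstract, but a minimal hard-threshold baseline on the histogram difference between consecutive frames shows the quantity such systems classify. The bin count and threshold below are illustrative:

```python
def shot_boundaries(frames, bins=8, threshold=0.5):
    """Detect cuts by thresholding the normalized histogram difference
    between consecutive frames (each frame given as a flat list of
    8-bit pixel values). A hard-threshold baseline; the paper instead
    classifies these differences with a GA-tuned fuzzy system."""
    def hist(frame):
        h = [0] * bins
        for px in frame:
            h[min(px * bins // 256, bins - 1)] += 1
        n = len(frame)
        return [c / n for c in h]
    cuts = []
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / 2
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# 10 dark frames then 10 bright frames: one cut at frame 10
frames = [[20] * 64 for _ in range(10)] + [[220] * 64 for _ in range(10)]
print(shot_boundaries(frames))   # [10]
```

Replacing the fixed threshold with fuzzy membership functions lets the system grade gradual transitions (fades, dissolves) instead of only hard cuts, which is what the GA tunes.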

  10. Development of an IMU-based foot-ground contact detection (FGCD) algorithm.

    Science.gov (United States)

    Kim, Myeongkyu; Lee, Donghun

    2017-03-01

    It is well known that, to locate humans in GPS-denied environments, a lower-limb kinematic solution based on an Inertial Measurement Unit (IMU), force plate, and pressure insoles is essential. The force plate and pressure insoles are used to detect foot-ground contacts. However, the use of multiple sensors is not desirable in most cases. This paper documents the development of an IMU-based FGCD (foot-ground contact detection) algorithm considering variations in both walking terrain and speed. All IMU outputs showing significant changes at the moments of the foot-ground contact phases were identified through experiments on five walking terrains. For the experiment on each walking terrain, variations of walking speed were also examined to confirm the correlations between walking speed and the main parameters of the FGCD algorithm. As a result, an FGCD algorithm that successfully detects four contact phases was developed, and its performance was validated. Practitioner Summary: This research demonstrated that the four contact phases of Heel strike (or Toe strike), Full contact, Heel off and Toe off can be detected independently of walking speed and terrain, based on detection criteria composed of the ranges and rates of change of the main parameters measured by the Inertial Measurement Unit sensors.

  11. Algorithms for the detection of chewing behavior in dietary monitoring applications

    Science.gov (United States)

    Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres

    2009-08-01

    The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak between 0.5 Hz and 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
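The spectral criterion, a dominant temporal-frequency peak between 0.5 and 2.5 Hz, can be sketched with a plain DFT. The mouth-region tracking and the distance-based classifier are omitted, and the sampling rate below is illustrative:

```python
import math

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest DFT magnitude (excluding DC),
    computed with a plain O(N^2) DFT on a real-valued signal."""
    n = len(x)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def is_chewing(x, fs):
    """Flag chewing-like motion when the dominant frequency falls in
    the 0.5-2.5 Hz band reported in the abstract. A minimal sketch of
    the spectral criterion, not the paper's full classifier."""
    return 0.5 <= dominant_frequency(x, fs) <= 2.5

fs = 30.0                                    # frames per second (assumed)
motion = [math.sin(2 * math.pi * 1.5 * t / fs) for t in range(300)]
print(dominant_frequency(motion, fs), is_chewing(motion, fs))   # 1.5 True
```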

  12. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies of moving object detection are typically chosen to improve detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and methodology.

  13. Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks

    Directory of Open Access Journals (Sweden)

    Kazeem B. Adedeji

    2017-10-01

    Full Text Available Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are dispersed in nature, with numerous nodes and branches. Consequently, identifying the segment(s) of the network, and the exact leaking pipelines connected to those segment(s), where higher background leakage outflow occurs is a challenging task. Background leakage concerns the outflow from small cracks or deteriorated joints. In addition, because such leaks are diffuse flows, they are not characterised by a quick pressure drop and are not detectable by measuring instruments. Consequently, they go unreported for long periods of time, adding substantially to water loss volume. Most existing research focuses on the detection and localisation of burst-type leakages, which are characterised by a sudden pressure drop. In this work, an algorithm for detecting and estimating background leakage in water distribution networks is presented. The algorithm integrates a leakage model into a classical WDN hydraulic model for solving the network leakage flows. The applicability of the developed algorithm is demonstrated on two different water networks. The results for the tested networks are discussed, and the solutions obtained show the benefits of the proposed algorithm. Notably, the algorithm permits the detection of critical segments or pipes of the network experiencing higher leakage outflow and indicates the probable pipes of the network where pressure control can be performed. The possible positions of pressure control elements along such critical pipes will be addressed in future work.
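The abstract does not specify the leakage model; a commonly used pressure-driven form is q = β·L·p^α with a leakage exponent α ≈ 1.18. The sketch below applies only that hypothetical model, not the paper's integrated hydraulic solver; β and the pipe data are invented for illustration.

```python
def background_leakage(pipes, alpha=1.18, beta=1e-5):
    """Pressure-driven background leakage per pipe: q = beta * L * p^alpha,
    with p the mean of the end-node pressures and L the pipe length.
    alpha ~ 1.18 is a commonly used leakage exponent and beta is an
    illustrative emitter coefficient; this is only the kind of leakage
    model such algorithms embed in a hydraulic solver."""
    flows = {}
    for name, (length_m, p1, p2) in pipes.items():
        p = (p1 + p2) / 2.0                  # mean pressure head (m)
        flows[name] = beta * length_m * p ** alpha
    return flows

# Hypothetical pipes: name -> (length in m, end-node pressure heads in m)
pipes = {"P1": (500.0, 30.0, 28.0), "P2": (500.0, 55.0, 53.0)}
flows = background_leakage(pipes)
print(flows["P2"] > flows["P1"])   # True: higher pressure, more leakage
```

The monotone pressure dependence is what makes pressure control along the critical pipes an effective loss-reduction measure, as the abstract notes.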

  14. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
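The study's central finding, that feature extraction (e.g., removing the seasonal cycle) matters more than the choice of detector, can be illustrated in the univariate case: subtract the per-phase seasonal mean, then apply a plain z-score detector to the residuals. A minimal sketch with toy data:

```python
def deseasonalized_anomalies(series, period, z_thresh=3.0):
    """Subtract the per-phase seasonal mean (the feature-extraction
    step the study finds most important), then flag residuals more
    than z_thresh standard deviations from zero. A univariate sketch
    of the workflow; the study evaluates multivariate detectors on
    top of such features."""
    phase_sum = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        phase_sum[i % period] += v
        counts[i % period] += 1
    phase_mean = [s / c for s, c in zip(phase_sum, counts)]
    resid = [v - phase_mean[i % period] for i, v in enumerate(series)]
    sd = max((sum(r * r for r in resid) / len(resid)) ** 0.5, 1e-9)
    return [i for i, r in enumerate(resid) if abs(r) > z_thresh * sd]

season = [0.0, 2.0, 4.0, 2.0]
series = season * 10
series[25] += 10.0                   # inject one anomalous spike
print(deseasonalized_anomalies(series, period=4))   # [25]
```

Without the deseasonalizing step, the spike would have to compete with the seasonal swing itself, and a plain z-score would either miss it or flag every cycle peak.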

  15. Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm.

    Science.gov (United States)

    Ffrench, P A; Zeidler, J H; Ku, W H

    1997-01-01

    Two-dimensional (2-D) adaptive filtering is a technique that can be applied to many image processing applications. This paper will focus on the development of an improved 2-D adaptive lattice algorithm (2-D AL) and its application to the removal of correlated clutter to enhance the detectability of small objects in images. The two improvements proposed here are increased flexibility in the calculation of the reflection coefficients and a 2-D method to update the correlations used in the 2-D AL algorithm. The 2-D AL algorithm is shown to predict correlated clutter in image data and the resulting filter is compared with an ideal Wiener-Hopf filter. The results of the clutter removal will be compared to previously published ones for a 2-D least mean square (LMS) algorithm. 2-D AL is better able to predict spatially varying clutter than the 2-D LMS algorithm, since it converges faster to new image properties. Examples of these improvements are shown for a spatially varying 2-D sinusoid in white noise and simulated clouds. The 2-D LMS and 2-D AL algorithms are also shown to enhance a mammogram image for the detection of small microcalcifications and stellate lesions.
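
    For reference, the 2-D LMS baseline that the lattice algorithm is compared against can be sketched as a causal-window pixel predictor whose error output is the clutter-whitened image. This is a simplified sketch, not the paper's implementation; the window size and step size below are illustrative:

```python
import numpy as np

def tdlms_error(img, mu=0.01, k=2):
    """2-D LMS: predict each pixel from a causal window of already-seen
    pixels, adapt the weights, and return the prediction error."""
    h, w = img.shape
    n_taps = (2 * k + 1) * k + k          # k rows above plus k pixels to the left
    wts = np.zeros(n_taps)
    err = np.zeros_like(img)
    for r in range(k, h):
        for c in range(k, w - k):
            win = np.concatenate([img[r - k:r, c - k:c + k + 1].ravel(),
                                  img[r, c - k:c]])
            e = img[r, c] - wts @ win
            err[r, c] = e
            wts += 2 * mu * e * win        # LMS weight update
    return err

r, c = np.meshgrid(np.arange(48), np.arange(48), indexing="ij")
clutter = np.sin(0.3 * (r + c))            # smooth, correlated "clutter"
residual = tdlms_error(clutter)
```

    After a few rows of adaptation the residual variance drops well below the clutter variance, which is the whitening effect a small target detector relies on.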

  16. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    Full Text Available In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits of the candidate code word are flipped repeatedly, searched in the order of most likely error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
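
    A toy illustration of the underlying hard-decision bit-flipping idea, here Gallager-style flipping on a binary (7,4) Hamming code rather than the paper's non-binary magnitude-sum scheme:

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],      # (7,4) Hamming parity checks
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(y, H, max_iter=10):
    """Hard-decision bit flipping: flip the bit involved in the most
    unsatisfied parity checks until the syndrome is zero."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                       # valid code word reached
        counts = H.T @ syndrome            # unsatisfied checks per bit
        y[counts.argmax()] ^= 1
    return y

sent = np.zeros(7, dtype=int)              # all-zero code word
recv = sent.copy(); recv[6] ^= 1           # single bit error on the channel
print(bit_flip_decode(recv, H))            # -> [0 0 0 0 0 0 0]
```

    On this toy code every single-bit error is corrected in one flip; the paper's contribution is making this class of decoder competitive for non-binary LDPC codes.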

  17. A review of feature detection and match algorithms for localization and mapping

    Science.gov (United States)

    Li, Shimiao

    2017-09-01

    Localization and mapping is an essential ability of a robot to keep track of its own location in an unknown environment. Among existing methods for this purpose, vision-based methods are more effective solutions for being accurate, inexpensive and versatile. Vision-based methods can generally be categorized as feature-based approaches and appearance-based approaches. Feature-based approaches show higher performance in textured scenarios, but their performance depends heavily on the applied feature-detection algorithms. In this paper, we survey algorithms for feature detection, which is an essential step in achieving vision-based localization and mapping, and present mathematical models of the algorithms in turn. To compare the performance of the algorithms, we conducted a series of experiments on their accuracy, speed, scale invariance and rotation invariance. The results of the experiments showed that ORB is the fastest algorithm in detecting and matching features, with a speed more than 10 times that of SURF and approximately 40 times that of SIFT. SIFT, although at no advantage in terms of speed, produces the most correct matching pairs, confirming its accuracy.
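
    Part of ORB's speed comes from its binary descriptors: matching reduces to brute-force Hamming-distance search, which can be sketched without OpenCV (the descriptor values below are synthetic):

```python
import numpy as np

def match_hamming(query, train):
    """Brute-force matching of binary descriptors (uint8 rows):
    for each query descriptor return (query_idx, train_idx, distance)."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.unpackbits(q ^ train, axis=1).sum(axis=1)
        ti = int(dists.argmin())
        matches.append((qi, ti, int(dists[ti])))
    return matches

rng = np.random.default_rng(1)
train = rng.integers(0, 256, size=(4, 32), dtype=np.uint8)  # 256-bit descriptors
query = train[2:3].copy()
query[0, 0] ^= 0b00000100               # corrupt one descriptor bit
print(match_hamming(query, train))      # -> [(0, 2, 1)]
```

    In practice this is what OpenCV's `BFMatcher` with `NORM_HAMMING` does for ORB, with additional cross-check or ratio-test filtering.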

  18. Wavelet neural networks initialization using hybridized clustering and harmony search algorithm: Application in epileptic seizure detection

    Science.gov (United States)

    Zainuddin, Zarita; Lai, Kee Huong; Ong, Pauline

    2013-04-01

    Artificial neural networks (ANNs) are powerful mathematical models that are used to solve complex real world problems. Wavelet neural networks (WNNs), which were developed based on the wavelet theory, are a variant of ANNs. During the training phase of WNNs, several parameters need to be initialized, including the type of wavelet activation functions, translation vectors, and the dilation parameter. The conventional k-means and fuzzy c-means clustering algorithms have been used to select the translation vectors. However, the solution vectors might get trapped at local minima. In this regard, the evolutionary harmony search algorithm, which is capable of searching for near-optimum solution vectors, both locally and globally, is introduced to circumvent this problem. In this paper, the conventional k-means and fuzzy c-means clustering algorithms were hybridized with the metaheuristic harmony search algorithm. In addition to obtaining an accurate estimate of the global minimum, these hybridized algorithms also offer more than one solution to a particular problem, since many possible solution vectors can be generated and stored in the harmony memory. To validate the robustness of the proposed WNNs, the real world problem of epileptic seizure detection was presented. The overall classification accuracy from the simulation showed that the hybridized metaheuristic algorithms outperformed the standard k-means and fuzzy c-means clustering algorithms.
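
    A minimal harmony search sketch, showing the memory-consideration (HMCR) and pitch-adjustment (PAR) steps on a toy objective; the parameter values are illustrative, and the clustering hybridization from the paper is not reproduced:

```python
import numpy as np

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimal harmony search: each new harmony draws every component
    either from memory (rate hmcr, pitch-adjusted with rate par)
    or uniformly at random from the bounds; it replaces the worst
    stored harmony when it improves on it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    mem = rng.uniform(lo, hi, size=(hms, len(lo)))
    fit = np.array([f(x) for x in mem])
    for _ in range(iters):
        new = np.empty(len(lo))
        for d in range(len(lo)):
            if rng.random() < hmcr:
                new[d] = mem[rng.integers(hms), d]   # memory consideration
                if rng.random() < par:
                    new[d] += bw * rng.uniform(-1, 1)  # pitch adjustment
            else:
                new[d] = rng.uniform(lo[d], hi[d])   # random selection
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = fit.argmax()
        if fn < fit[worst]:
            mem[worst], fit[worst] = new, fn
    return mem[fit.argmin()], float(fit.min())

sphere = lambda x: float((x ** 2).sum())
best, val = harmony_search(sphere, [(-5, 5), (-5, 5)])
```

    The harmony memory is exactly the feature the abstract highlights: at the end of a run it holds several good candidate translation vectors rather than a single solution.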

  19. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstructions. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. Cavities and holes are conceptualized as structures in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes with varying point set densities and distributions.

  20. An improved label propagation algorithm based on node importance and random walk for community detection

    Science.gov (United States)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting additional information about nodes and edges; moreover, they do not consider each node on its merits. The label propagation algorithm (LPA) is a near linear time algorithm which aims to find communities in a network, and its high efficiency has attracted many researchers; in recent years, several improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in the LPA. Secondly, a new metric IAS (importance and similarity) is calculated from node importance and the similarity matrix, which can be used to avoid the random selection in the original LPA and improve the algorithm's stability. Finally, a test on real-world and synthetic networks is given. The results show that this algorithm has better performance than existing methods in finding community structure.
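
    The baseline LPA that NILPA builds on can be sketched as follows. This version breaks ties deterministically (largest label) rather than randomly, which is precisely the kind of arbitrary choice the proposed IAS metric is designed to avoid:

```python
from collections import Counter

def label_propagation(adj, max_rounds=20):
    """Asynchronous label propagation: every node adopts the most
    frequent label among its neighbours, ties broken by largest
    label so the sketch is reproducible."""
    labels = {v: v for v in adj}
    for _ in range(max_rounds):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = max(l for l, c in counts.items() if c == top)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# two 4-cliques joined by a single bridge edge (3-4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
print(label_propagation(adj))
```

    On this toy network the two cliques converge to two distinct labels, with the bridge edge unable to merge them.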

  1. Automatic face detection and tracking based on Adaboost with camshift algorithm

    Science.gov (United States)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face-tracking algorithms can hardly specify the initial location and scale of a face automatically, this paper proposes a fast and robust method to detect and track faces by combining AdaBoost with the CAMShift algorithm. First, the location and scale of the face are specified by the AdaBoost algorithm based on Haar-like features and conveyed automatically to the initial search window. Then, the CAMShift algorithm is applied to track the face. Experiments based on OpenCV yield good results, even in special circumstances such as lighting changes and rapid face movement. Besides, by drawing out the tracking trajectory of the face movement, some abnormal behavior events can be analyzed.

  2. Adaptive Weighted Morphology Detection Algorithm of Plane Object in Docking Guidance System

    Directory of Open Access Journals (Sweden)

    Guo yan-ying

    2010-09-01

    Full Text Available In this paper, we present an image segmentation algorithm based on adaptive weighted mathematical morphology edge detectors. The performance of the proposed algorithm has been demonstrated on the Lena image. The input of the proposed algorithm is a grey level image. The image is first processed by the mathematical morphological closing and dilation residue edge detectors to enhance the edge features and sketch out the contour of the image, respectively. Then the adaptive weighted structuring element (SE) operation is applied to the edge-extracted image to fuse edge gaps and fill holes. Experimental results show that it not only extracts fine edge detail, but also preserves edge integrity better than classical edge detection algorithms.
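
    The dilation-residue edge detector mentioned above can be sketched with grey-scale morphology from SciPy (flat structuring element; the paper's adaptive weighting step is not reproduced):

```python
import numpy as np
from scipy.ndimage import grey_dilation

def dilation_residue_edges(img, size=3):
    """Dilation residue: dilate with a flat size x size structuring
    element and subtract the original; pixels adjacent to brighter
    neighbours remain, sketching the contour."""
    return grey_dilation(img, size=(size, size)) - img

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                    # bright square on dark background
edges = dilation_residue_edges(img)
```

    On this toy image the residue is 1.0 on the ring of background pixels just outside the square and 0 everywhere else; the closing residue is computed analogously with `grey_closing`.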

  3. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
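
    The twin-comparison step can be sketched on precomputed frame-difference values; the thresholds and differences below are illustrative, and the dominant-color refinement is not reproduced:

```python
def twin_comparison(diffs, t_high, t_low):
    """Twin-comparison shot-boundary detection on frame differences:
    a single difference above t_high is a cut; a run of differences
    above t_low whose accumulated sum exceeds t_high is a gradual
    transition."""
    cuts, graduals = [], []
    i, n = 0, len(diffs)
    while i < n:
        if diffs[i] >= t_high:
            cuts.append(i)
            i += 1
        elif diffs[i] >= t_low:
            start, acc = i, 0.0
            while i < n and t_low <= diffs[i] < t_high:
                acc += diffs[i]
                i += 1
            if acc >= t_high:
                graduals.append((start, i - 1))
        else:
            i += 1
    return cuts, graduals

diffs = [1, 1, 20, 1, 1, 5, 6, 5, 1]      # hypothetical histogram differences
print(twin_comparison(diffs, t_high=10, t_low=3))  # -> ([2], [(5, 7)])
```

    The lower threshold is what lets the method catch fades and dissolves whose per-frame differences never reach the cut threshold.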

  4. Cloud detection algorithm comparison and validation for operational Landsat data products

    Science.gov (United States)

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate
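
    The evaluation metrics used in the study (overall accuracy, omission error, commission error) can be computed from a predicted mask and a manual truth mask as follows (toy masks shown):

```python
import numpy as np

def mask_metrics(pred, truth):
    """Binary-mask accuracy metrics:
    omission error   = truth pixels the prediction missed / truth pixels,
    commission error = false alarms / predicted pixels."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    overall = (pred == truth).mean()
    omission = (~pred & truth).sum() / truth.sum()
    commission = (pred & ~truth).sum() / pred.sum()
    return overall, omission, commission

truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0]], bool)  # manual cloud mask
pred  = np.array([[1, 0, 0, 0], [1, 0, 1, 0]], bool)  # algorithm output
overall, omission, commission = mask_metrics(pred, truth)
print(overall)   # -> 0.75
```

    Omission and commission correspond to one minus producer's and user's accuracy, respectively, in the usual remote-sensing terminology.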

  5. Syndromic algorithms for detection of gambiense human African trypanosomiasis in South Sudan.

    Directory of Open Access Journals (Sweden)

    Jennifer J Palmer

    Full Text Available Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense, but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan. Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7 month period were collected to construct and evaluate over 14,000 four item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians were asked to make referral decisions based on the symptom dataset. The best performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9-92.6%, a negative predictive value of up to 98.8% and a positive predictive value in this context of 8.4-8.7%. In terms of sensitivity, these out-performed more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive. In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes. The algorithms proposed here, though promising, should be
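
    The reported performance measures follow from a standard 2x2 screening table. A sketch with hypothetical counts chosen only to roughly mirror the reported ranges (the study's actual table is not given here):

```python
def screening_metrics(tp, fp, tn, fn):
    """Diagnostic performance of a referral algorithm from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # cases correctly referred
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, ppv, npv

# hypothetical counts for a low-prevalence, treatment-seeking population
sens, ppv, npv = screening_metrics(tp=24, fp=260, tn=175, fn=3)
```

    With 24 of 27 cases caught, sensitivity is about 0.89 while PPV stays below 0.09, illustrating why a highly sensitive referral rule still sends roughly ten patients for testing per confirmed case.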

  6. A hybrid algorithm for multiple change-point detection in continuous measurements

    Science.gov (United States)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.
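
    As a simplified illustration of change-point estimation, a single mean shift can be located by two-segment least squares (the paper's hybrid uses sequential techniques and the Cross-Entropy method to handle multiple change points, which this sketch does not attempt):

```python
import numpy as np

def best_single_change_point(x):
    """Return the split index k (1 <= k < n) minimizing the summed
    squared deviation of each segment from its own mean."""
    x = np.asarray(x, float)
    best_k, best_cost = 1, np.inf
    for k in range(1, len(x)):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + \
               ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

x = np.r_[np.zeros(20), np.full(20, 5.0)]   # copy-number-like mean shift
print(best_single_change_point(x))          # -> 20
```

    Extending this cost to multiple segments is what makes the problem combinatorial and motivates stochastic optimizers such as Cross-Entropy.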

  7. Technology-aware algorithm design for neural spike detection, feature extraction, and dimensionality reduction.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Marković, Dejan

    2010-10-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to 1) obtain single-unit activity and 2) perform data reduction for wireless data transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection, feature-extraction, and dimensionality-reduction algorithms for spike sorting are described and evaluated in terms of accuracy versus complexity. The nonlinear energy operator is chosen as the optimal spike-detection algorithm, being most robust over noise and relatively simple. Discrete derivatives is chosen as the optimal feature-extraction method, maintaining high accuracy across signal-to-noise ratios with a complexity orders of magnitude less than that of traditional methods such as principal-component analysis. We introduce the maximum-difference algorithm, which is shown to be the best dimensionality-reduction method for hardware spike sorting.
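
    The nonlinear energy operator chosen as the detector is simple to state: psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded relative to its mean. A sketch with a synthetic spike (the threshold multiplier is illustrative, not the paper's calibrated value):

```python
import numpy as np

def neo_detect(x, c=8.0):
    """Nonlinear energy operator spike detection:
    psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at c * mean(psi)."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    thr = c * psi.mean()
    return np.flatnonzero(psi > thr) + 1   # +1 because psi[0] is at x[1]

x = np.zeros(200)
x[50] = 10.0                               # isolated synthetic "spike"
print(neo_detect(x))                       # -> [50]
```

    The operator emphasizes samples that are large in both amplitude and instantaneous frequency, which is why it stays robust at low signal-to-noise ratios while costing only a multiply-subtract per sample.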

  8. A new, fast algorithm for detecting protein coevolution using maximum compatible cliques

    Directory of Open Access Journals (Sweden)

    Rose Jonathan

    2011-06-01

    Full Text Available Abstract Background The MatrixMatchMaker (MMM) algorithm was recently introduced to detect the similarity between phylogenetic trees and thus the coevolution between proteins. MMM finds the largest common submatrices between pairs of phylogenetic distance matrices, and has numerous advantages over existing methods of coevolution detection. However, these advantages came at the cost of a very long execution time. Results In this paper, we show that the problem of finding the maximum submatrix reduces to a multiple maximum clique subproblem on a graph of protein pairs. This allowed us to develop a new algorithm and program implementation, MMMvII, which achieved more than 600× speedup with comparable accuracy to the original MMM. Conclusions MMMvII will thus allow for more extensive and intricate analyses of coevolution. Availability An implementation of the MMMvII algorithm is available at: http://www.uhnresearch.ca/labs/tillier/MMMWEBvII/MMMWEBvII.php
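
    The reduction to clique finding can be illustrated with a brute-force maximum-clique search on a toy graph; real MMM instances require far more efficient solvers, and the compatibility graph here is invented for illustration:

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Exhaustive maximum-clique search (viable for toy graphs only:
    the problem is NP-hard in general)."""
    adj = set(map(frozenset, edges))
    for size in range(len(list(vertices)), 0, -1):
        for cand in combinations(vertices, size):
            if all(frozenset(p) in adj for p in combinations(cand, 2)):
                return set(cand)
    return set()

# vertices = compatible protein-pair matchings, edges = mutual compatibility
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
print(max_clique(range(5), edges))         # -> {0, 1, 2}
```

    The maximum clique corresponds to the largest set of mutually compatible distance-matrix entries, i.e. the largest common submatrix.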

  9. Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake

    Directory of Open Access Journals (Sweden)

    Austin J. Cooner

    2016-10-01

    Full Text Available Remote sensing continues to be an invaluable tool in earthquake damage assessments and emergency response. This study evaluates the effectiveness of multilayer feedforward neural networks, radial basis neural networks, and Random Forests in detecting earthquake damage caused by the 2010 Port-au-Prince, Haiti 7.0 moment magnitude (Mw event. Additionally, textural and structural features including entropy, dissimilarity, Laplacian of Gaussian, and rectangular fit are investigated as key variables for high spatial resolution imagery classification. Our findings show that each of the algorithms achieved nearly a 90% kernel density match using the United Nations Operational Satellite Applications Programme (UNITAR/UNOSAT dataset as validation. The multilayer feedforward network was able to achieve an error rate below 40% in detecting damaged buildings. Spatial features of texture and structure were far more important in algorithmic classification than spectral information, highlighting the potential for future implementation of machine learning algorithms which use panchromatic or pansharpened imagery alone.

  10. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with the diagnostic opinion of an eye expert to define the basic threshold of the screening system, and later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using our proposed system has been suggested, which confirms the telemedicine application of the proposed system.

  11. An Improved ARIMA-Based Traffic Anomaly Detection Algorithm for Wireless Sensor Networks

    OpenAIRE

    Qin Yu; Lyu Jibin; Lirui Jiang

    2016-01-01

    Traffic anomaly detection is emerging as a necessary component as wireless networks gain popularity. In this paper, based on the improved Autoregressive Integrated Moving Average (ARIMA) model, we propose a traffic anomaly detection algorithm for wireless sensor networks (WSNs) which considers the particular imbalanced, nonstationary properties of the WSN traffic and the limited energy and computing capacity of the wireless sensors at the same time. We systematically analyze the characteristi...

  12. Obstacles Detection Algorithm in Forest based on Multi-sensor Data Fusion

    OpenAIRE

    Xiaokang Ding; Lei Yan; Jinhao Liu; Jianlei Kong; Zheng Yu

    2013-01-01

    In this paper an obstacle detection system was set up and a novel algorithm was proposed to detect obstacles in forests, aiming to improve the efficiency of automated operations and reduce the risk of forestry-harvester accidents in the complicated environment of forest areas. First, a 2D laser scanner and an infrared thermal imager were used to collect information about obstacles in forest areas. Second, the features of each obstacle, including temperature, color, aspect ratio, and ...

  13. CPU, GPU and FPGA Implementations of MALD: Ceramic Tile Surface Defects Detection Algorithm

    OpenAIRE

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko

    2014-01-01

    This paper addresses adjustments, implementation and performance comparison of the Moving Average with Local Difference (MALD) method for ceramic tile surface defect detection. The ceramic tile production process is completely autonomous, except for the final stage, where the human eye is required for defect detection. Recent computational platform development and advances in machine vision provide us with several options for MALD algorithm implementation. In order to exploit the shortest execution tim...

  14. Validation of New Signal Detection Methods for Web Query Log Data Compared to Signal Detection Algorithms Used With FAERS.

    Science.gov (United States)

    Colilla, Susan; Tov, Elad Yom; Zhang, Ling; Kurzinger, Marie-Laure; Tcherny-Lessenot, Stephanie; Penfornis, Catherine; Jen, Shang; Gonzalez, Danny S; Caubel, Patrick; Welsh, Susan; Juhaeri, Juhaeri

    2017-05-01

    Post-marketing drug surveillance is largely based on signals found in spontaneous reports from patients and healthcare providers. Rare adverse drug reactions and adverse events (AEs) that may develop after long-term exposure to a drug or from drug interactions may be missed. The US FDA and others have proposed that web-based data could be mined as a resource to detect latent signals associated with adverse drug reactions. Recently, a web-based search query method called a query log reaction score (QLRS) was developed to detect whether AEs associated with certain drugs could be found from search engine query data. In this study, we compare the performance of two other algorithms, the proportional query ratio (PQR) and the proportional query rate ratio (Q-PRR) against that of two reference signal-detection algorithms (SDAs) commonly used with the FDA AE Reporting System (FAERS) database. In summary, the web query methods have moderate sensitivity (80%) in detecting signals in web query data compared with reference SDAs in FAERS when the web query data are filtered, but the query metrics generate many false-positives and have low specificity compared with reference SDAs in FAERS. Future research is needed to find better refinements of query data and/or the metrics to improve the specificity of these web query log algorithms.
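
    For context, a classical disproportionality SDA of the kind used with FAERS is the proportional reporting ratio (PRR); the web-query PQR and Q-PRR metrics are analogues of it. A sketch with hypothetical counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table:
    a = reports of the event with the drug of interest,
    b = all other events with the drug,
    c = reports of the event with all other drugs,
    d = all other events with all other drugs."""
    return (a / (a + b)) / (c / (c + d))

# hypothetical counts: the event is reported nine times more often
# with the drug than in the background
print(round(prr(a=10, b=90, c=10, d=890), 2))   # -> 9.0
```

    Signal-detection practice typically combines a PRR cutoff (e.g. PRR >= 2) with a minimum report count and a chi-squared criterion; those exact cutoffs are not taken from this study.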

  15. Comparison of Different Classification Algorithms for the Detection of User's Interaction with Windows in Office Buildings

    DEFF Research Database (Denmark)

    Markovic, Romana; Wolf, Sebastian; Cao, Jun

    2017-01-01

    classification algorithms for detecting occupant's interactions with windows, while taking the imbalanced properties of the available data set into account. The tested methods include support vector machines (SVM), random forests, and their combination with dynamic Bayesian networks (DBN). The results will show...

  16. A Time-Domain Structural Damage Detection Method Based on Improved Multiparticle Swarm Coevolution Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shao-Fei Jiang

    2014-01-01

    Full Text Available Optimization techniques have been applied to structural health monitoring and damage detection of civil infrastructures for two decades. The standard particle swarm optimization (PSO) easily falls into local optima, and the same deficiency exists in the multiparticle swarm coevolution optimization (MPSCO). This paper first presents an improved MPSCO algorithm (IMPSCO) and then integrates it with Newmark's algorithm to localize and quantify structural damage using the proposed damage threshold. To validate the proposed method, a numerical simulation and an experimental study of a seven-story steel frame were employed, and a comparison was made between the proposed method and the genetic algorithm (GA). The results are threefold: (1) the proposed method is not only capable of localizing and quantifying damage, but also has good noise tolerance; (2) the damage location can be accurately detected using the damage threshold proposed in this paper; and (3) compared with the GA, the IMPSCO algorithm is more efficient and accurate for damage detection problems in general. This implies that the proposed method is applicable and effective in the community of damage detection and structural health monitoring.
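
    For reference, the standard single-swarm PSO that IMPSCO improves upon can be sketched on a toy objective; the multiparticle coevolution and the Newmark coupling are not reproduced, and the hyperparameters are conventional defaults:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval                   # update personal bests
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

sphere = lambda p: float((p ** 2).sum())
best, val = pso(sphere)
```

    In damage detection the objective would instead measure the mismatch between measured and model-predicted responses, with particle positions encoding element stiffness reductions.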

  17. Evolutionary Algorithms for the Detection of Structural Breaks in Time Series

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid

    2013-01-01

    series under consideration is available. Therefore, a black-box optimization approach is our method of choice for detecting structural breaks. We describe an evolutionary algorithm framework which easily adapts to a large number of statistical settings. The experiments on artificial and real-world time...

  18. Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer

    Science.gov (United States)

    2016-06-01

    Technical report prepared under contract; cleared for public release (Distribution Statement A).

  19. Drug Safety Monitoring in Children: Performance of Signal Detection Algorithms and Impact of Age Stratification

    NARCIS (Netherlands)

    O.U. Osokogu (Osemeke); C. Dodd (Caitlin); A.C. Pacurariu (Alexandra C.); F. Kaguelidou (Florentia); D.M. Weibel (Daniel); M.C.J.M. Sturkenboom (Miriam)

    2016-01-01

    textabstractIntroduction: Spontaneous reports of suspected adverse drug reactions (ADRs) can be analyzed to yield additional drug safety evidence for the pediatric population. Signal detection algorithms (SDAs) are required for these analyses; however, the performance of SDAs in the pediatric

  20. Review of electronic-nose technologies and algorithms to detect hazardous chemicals in the environment

    Science.gov (United States)

    Alphus D. Wilson

    2012-01-01

    Novel mobile electronic-nose (e-nose) devices and algorithms capable of real-time detection of industrial and municipal pollutants released from point sources have recently been developed by scientists worldwide; they are useful for monitoring specific environmental-pollutant levels for the enforcement and implementation of effective pollution-abatement programs. E-nose...

  1. CoSMOS: Performance of Kurtosis Algorithm for Radio Frequency Interference Detection and Mitigation

    DEFF Research Database (Denmark)

    Misra, Sidharth; Kristensen, Steen Savstrup; Skou, Niels

    2007-01-01

    The performance of a previously developed algorithm for Radio Frequency Interference (RFI) detection and mitigation is experimentally evaluated. Results obtained from CoSMOS, an airborne campaign using a fully polarimetric L-band radiometer are analyzed for this purpose. Data is collected using two...
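
    The kurtosis algorithm rests on the fact that Gaussian thermal noise has a kurtosis of 3, while sinusoidal interference pulls it toward 1.5. A sketch with synthetic radiometer samples (the detection threshold and interference level are illustrative):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis: E[(x - mean)^4] / var^2 (Gaussian -> 3)."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

def rfi_flag(x, delta=0.3):
    """Flag a block of samples as RFI-contaminated when its kurtosis
    departs from the Gaussian value of 3 by more than delta."""
    return abs(kurtosis(x) - 3.0) > delta

rng = np.random.default_rng(0)
noise = rng.normal(size=10000)                # clean thermal-noise samples
t = np.arange(10000)
rfi = noise + 3 * np.sin(0.1 * t)             # added sinusoidal interference
print(rfi_flag(noise), rfi_flag(rfi))         # -> False True
```

    Because the test operates on the shape of the amplitude distribution rather than total power, it can flag interference that raises the brightness temperature by far less than the radiometric uncertainty.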

  2. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    Science.gov (United States)

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  3. NIC: a robust background extraction algorithm for foreground detection in dynamic scenes

    NARCIS (Netherlands)

    Huynh-The, Thien; Banos Legran, Oresti; Lee, Sungyoung; Kang, Byeong Ho; Kim, Eun-Soo; Le-Tien, Thuong

    2016-01-01

    This paper presents a robust foreground detection method capable of adapting to different motion speeds in scenes. A key contribution of this paper is the background estimation using a proposed novel algorithm, neighbor-based intensity correction (NIC), that identifies and modifies the motion pixels

  4. Credit card fraud detection: An application of the gene expression messy genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kargupta, H.; Gattiker, J.R.; Buescher, K.

    1996-05-01

    This paper describes an application of the recently introduced gene expression messy genetic algorithm (GEMGA) (Kargupta, 1996) for detecting fraudulent transactions of credit cards. It also explains the fundamental concepts underlying the GEMGA in the light of the SEARCH (Search Envisioned As Relation and Class Hierarchizing) (Kargupta, 1995) framework.

  5. MEMS-based sensing and algorithm development for fall detection and gait analysis

    Science.gov (United States)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure the pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert, and a second one is utilized by the waist-mounted device. The primary fall-detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5 s time intervals. This differential acceleration provides the quantification via an energy index. From this index one may distinguish different gaits and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm classifies an event as a fall or a stumble. The secondary algorithm is derived from frequency-analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data is then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
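
    The energy index over 1.5 s windows described above can be sketched as follows; the sample rate, the threshold value, and the simulated impact are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

FS = 100               # assumed sample rate (Hz)
WIN = int(1.5 * FS)    # 1.5 s analysis window, as in the abstract

def energy_index(acc, win=WIN):
    """Energy of the differential acceleration magnitude per 1.5 s window."""
    mag = np.linalg.norm(acc, axis=1)    # |a| from tri-axial samples
    diff = np.diff(mag)                  # differential acceleration
    n = len(diff) // win
    return np.array([np.sum(diff[i * win:(i + 1) * win] ** 2) for i in range(n)])

def detect_falls(acc, threshold=50.0):   # threshold is illustrative only
    """Indices of windows whose energy index exceeds the fall threshold."""
    return np.where(energy_index(acc) > threshold)[0]

# Quiet standing (gravity only) with one large impact spike in window 1.
acc = np.tile([0.0, 0.0, 9.8], (450, 1))
acc[200] = [0.0, 0.0, 30.0]
falls = detect_falls(acc)
```

The spike produces two large magnitude differences, so only the second window crosses the threshold.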

  6. Fall-Detection Algorithm Using 3-Axis Acceleration: Combination with Simple Threshold and Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Dongha Lim

    2014-01-01

    Falls are a serious medical and social problem among the elderly. This has led to the development of automatic fall-detection systems. To detect falls, a fall-detection algorithm that combines a simple threshold method and a hidden Markov model (HMM) using 3-axis acceleration is proposed. To apply the proposed fall-detection algorithm and detect falls, a wearable fall-detection device has been designed and produced. Several fall-feature parameters of 3-axis acceleration are introduced and applied to a simple threshold method. Possible falls are chosen through the simple threshold and are applied to two types of HMM to distinguish between a fall and an activity of daily living (ADL). The results using the simple threshold, the HMM, and the combination of the simple method and HMM were compared and analyzed. The combination of the simple threshold method and HMM reduced the complexity of the hardware, and the proposed algorithm exhibited higher accuracy than the simple threshold method alone.

  7. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2017-09-01

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs’ flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time, model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  8. An Efficient Algorithm for the Detection of Exposed and Hidden Wormhole Attack

    Directory of Open Access Journals (Sweden)

    ZUBAIR AHMED KHAN

    2016-07-01

    MANETs (Mobile Ad Hoc Networks) are slowly integrating into our everyday lives; their most prominent uses are visible in disaster- and war-struck areas where physical infrastructure is almost impossible or very hard to build. MANETs, like other networks, face the threat of malicious users and their activities. A number of attacks have been identified, but the most severe of them is the wormhole attack, which has the ability to succeed even in the case of encrypted traffic and secure networks. Once a wormhole is launched successfully, the severity increases by the fact that attackers can launch other attacks too. This paper presents a comprehensive algorithm for the detection of exposed as well as hidden wormhole attacks while keeping the detection rate at a maximum and at the same time reducing false alarms. The algorithm does not require any extra hardware, time synchronization, or any special type of nodes. The architecture combines the Routing Table, RTT (Round Trip Time), and RSSI (Received Signal Strength Indicator) for comprehensive detection of the wormhole attack. The proposed technique is robust, lightweight, has low resource requirements, and provides real-time detection against the wormhole attack. Simulation results show that the algorithm is able to provide a higher detection rate, a higher packet delivery ratio, and negligible false alarms, and is also better in terms of ease of implementation, detection accuracy/speed, and processing overhead.
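
    The RTT/RSSI combination can be illustrated with a minimal decision rule: a wormhole tunnels packets over a long covert link, so a claimed one-hop neighbor tends to show an inflated round-trip time and/or a signal strength too weak for genuine radio range. This is a simplified sketch of the idea, not the paper's architecture, and both thresholds are assumed values.

```python
def wormhole_suspect(rtt_ms, rssi_dbm,
                     max_onehop_rtt_ms=5.0, min_onehop_rssi_dbm=-85.0):
    """Flag a claimed one-hop link whose statistics are physically implausible.

    Thresholds are illustrative; a deployment would calibrate them from the
    routing table's observed per-hop RTT and RSSI distributions.
    """
    return rtt_ms > max_onehop_rtt_ms or rssi_dbm < min_onehop_rssi_dbm
```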

  9. An Algorithm for Detection of DVB-T Signals Based on Their Second-Order Statistics

    Directory of Open Access Journals (Sweden)

    Jallon Pierre

    2008-01-01

    We propose in this paper a detection algorithm based on a cost function that jointly tests the correlation induced by the cyclic prefix and the fact that this correlation is time-periodic. In the first part of the paper, the cost function is introduced and some analytical results are given. In particular, the noise and multipath channel impacts on its values are theoretically analysed. In the second part of the paper, some asymptotic results are derived. A first exploitation of these results is used to build a detection test based on the false alarm probability. These results are also used to evaluate the impact of the number of cycle frequencies taken into account in the cost function on the detection performance. Thanks to numerical estimations, we have been able to estimate that the proposed algorithm detects DVB-T signals with an SNR of  dB. As a comparison, and in the same context, the detection algorithm proposed by the 802.22 WG in 2006 is able to detect these signals with an SNR of  dB.
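
    The cyclic-prefix correlation underlying such detectors can be sketched directly: in an OFDM symbol the prefix repeats the last N_cp samples of the useful part, so samples separated by the useful-symbol length N_u are correlated. The statistic below is a bare single-lag correlation (the paper's cost function additionally exploits its time-periodicity), and the symbol sizes and 0.05 decision level are assumptions for the demonstration.

```python
import numpy as np

def cp_correlation_stat(x, nu):
    """Normalized correlation between each sample and its copy nu samples
    later; nonzero on cyclic-prefix positions, ~0 for white noise."""
    prod = x[:-nu] * np.conj(x[nu:])
    return np.abs(prod.mean()) / np.mean(np.abs(x) ** 2)

rng = np.random.default_rng(1)
nu, ncp = 256, 32
# Synthesize OFDM-like symbols: random useful part plus a cyclic prefix.
syms = []
for _ in range(50):
    u = (rng.normal(size=nu) + 1j * rng.normal(size=nu)) / np.sqrt(2)
    syms.append(np.concatenate([u[-ncp:], u]))
ofdm = np.concatenate(syms)
noise = (rng.normal(size=ofdm.size) + 1j * rng.normal(size=ofdm.size)) / np.sqrt(2)
```

For the OFDM signal the statistic sits near ncp/(nu+ncp) ≈ 0.11, while for noise it stays near zero, so a simple threshold separates the two.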

  10. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    Science.gov (United States)

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time, model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
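
    The KF residual analysis feeding such a detector can be sketched with the classic normalized-innovation (chi-square gating) test; the abstract's ANFIS decision stage is omitted here, and the gate value is an assumed chi-square-style threshold.

```python
import numpy as np

def innovation_test(meas, pred, S, gate=9.0):
    """Gate the Kalman filter innovation nu = z - H*x_pred.

    d2 = nu' S^-1 nu (S: innovation covariance); a sensor fault is
    suspected when d2 exceeds the gate. The paper would pass such
    residual features on to an ANFIS decision system instead.
    """
    nu = np.asarray(meas, dtype=float) - np.asarray(pred, dtype=float)
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 > gate, d2

faulty, d2_f = innovation_test([10.0, 2.0], [1.0, 2.0], np.eye(2))   # 9-sigma jump
healthy, d2_h = innovation_test([1.1, 2.0], [1.0, 2.1], np.eye(2))   # small residual
```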

  11. Evolution of Testing Algorithms at a Tertiary Care Hospital for the Detection of Clostridium difficile Infections

    Science.gov (United States)

    Qutub, Mohammed; Govindan, Parasanth; Vattappillil, Anupama

    2017-01-01

    Background: Clostridium difficile-associated diarrhea (CDAD) is the commonest cause of nosocomial diarrhea. Methods for C. difficile detection include toxin or enzyme detection by immunoassays (EIAs), the cytotoxicity neutralization assay (CCNA), or FDA-approved PCR. Due to the tedious and time-consuming nature of the CCNA and the suboptimal specificity and sensitivity of EIAs, these assays cannot be used as stand-alone tests. One approach to combining these assays is a two-step algorithm, in which an Ag-EIA is used as a screening test and positives are confirmed either by a toxin-detection enzyme immunoassay or by CCNA. Another approach is a three-step algorithm, in which an Ag-EIA is used as a screening test, all positives are tested by a toxin-detection EIA, and toxin-negative specimens are further tested either by PCR or by CCNA. We therefore aimed to evaluate a new two-step algorithm for the detection of toxigenic C. difficile and its role in improving turnaround time. Methods: A total of 3518 non-formed stool specimens from suspected cases of CDAD were collected. Specimens were tested either by GDH-toxin A/B ICA or by GeneXpert C. difficile PCR as per the algorithm (Figure 1). Results: Of the 3518 stool specimens tested, 130 (3.70%) were positive and 2989 (84.96%) were negative by GDH-toxin A/B ICA, while 399 (11.34%) required PCR. None of the negative-GDH and positive-toxin A/B samples tested positive by PCR. Also, none of the negative-GDH and negative-toxin A/B samples tested positive by PCR (Figure 2). Conclusion: The study indicates that when the GDH-toxin A/B ICA is used, almost 89% of results can be reported within 30 minutes, about 3.7% of them being positive and 84.96% negative. Confirmation of discrepant GDH and toxin A/B results was by PCR. The new algorithm offered rapid detection of C. difficile by ICA, judicious use of PCR, and effectively reduced turnaround time. Figure-1: Two-step algorithm for C difficile testing. Figure-2: Results of two
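
    The two-step logic reads naturally as a decision function: concordant ICA results are reported immediately, and only discrepant specimens go to PCR. This is a sketch of that routing rule as described in the abstract, not laboratory software.

```python
def cdiff_algorithm(gdh_positive, toxin_positive):
    """Two-step C. difficile algorithm: GDH-toxin A/B ICA screen,
    with PCR reserved for discrepant (GDH/toxin disagreeing) results."""
    if gdh_positive and toxin_positive:
        return "positive"          # concordant: report within ~30 min
    if not gdh_positive and not toxin_positive:
        return "negative"          # concordant: report within ~30 min
    return "PCR"                   # discrepant: resolve by GeneXpert PCR
```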

  12. Analysis of the Chirplet Transform-Based Algorithm for Radar Detection of Accelerated Targets

    Science.gov (United States)

    Galushko, V. G.; Vavriv, D. M.

    2017-06-01

    Purpose: Efficiency analysis of an optimal algorithm for chirp-signal processing based on the chirplet transform, as applied to detection of radar targets in uniformly accelerated motion. Design/methodology/approach: Standard methods of optimal filtration theory are used to investigate the ambiguity function of chirp signals. Findings: An analytical expression has been derived for the ambiguity function of chirp signals, which is analyzed with respect to detection of radar targets moving at a constant acceleration. The sidelobe level and characteristic width of the ambiguity function with respect to the coordinates of frequency and its rate of change have been estimated. The gain in signal-to-noise ratio provided by the algorithm under consideration has been assessed, as compared with application of the standard Fourier transform to detection of chirp signals against a “white” noise background. It is shown that already with a comparatively small number of processing channels (elementary filters with respect to the frequency change rate) the gain in signal-to-noise ratio exceeds 10 dB. A block diagram for implementation of the algorithm under consideration is suggested on the basis of a multichannel weighted Fourier transform. Recommendations for selection of the detection algorithm parameters have been developed. Conclusions: The obtained results testify to the efficiency of the algorithm under consideration for detection of radar targets moving at a constant acceleration. Nevertheless, it seems expedient to perform computer simulations of its operability, taking into account the noise impact, along with trial measurements in real conditions.
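
    A multichannel processing scheme of this kind can be sketched as a bank of dechirping channels: each channel removes a candidate chirp rate (multiplication by exp(-jπat²)) and applies an FFT, and the channel/bin with the largest magnitude wins. The rate grid and the test signal below are assumptions for illustration.

```python
import numpy as np

def chirp_detect(x, fs, rates, nfft=None):
    """Dechirp-and-FFT bank: for each candidate rate a (Hz/s), multiply by
    exp(-j*pi*a*t^2) and FFT; return (peak magnitude, best rate, frequency)."""
    t = np.arange(len(x)) / fs
    best = (0.0, None, None)
    for a in rates:
        spec = np.fft.fft(x * np.exp(-1j * np.pi * a * t ** 2), nfft)
        k = int(np.abs(spec).argmax())
        mag = float(np.abs(spec[k]))
        if mag > best[0]:
            best = (mag, a, k * fs / len(spec))
    return best

fs = 1000.0
t = np.arange(1000) / fs
# Unit-amplitude chirp: start frequency 100 Hz, rate 200 Hz/s.
x = np.exp(1j * (2 * np.pi * 100.0 * t + np.pi * 200.0 * t ** 2))
peak, a_hat, f_hat = chirp_detect(x, fs, rates=[0.0, 100.0, 200.0, 300.0])
```

The matched channel collapses the chirp into a single tone, so its coherent FFT peak (≈ N) dominates the mismatched channels, which smear the energy over many bins.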

  13. Development and implementation of an algorithm for detection of protein complexes in large interaction networks

    Directory of Open Access Journals (Sweden)

    Kanaya Shigehiko

    2006-04-01

    Background: After complete sequencing of a number of genomes, the focus has now turned to proteomics. Advanced proteomics technologies such as the two-hybrid assay, mass spectrometry, etc. are producing huge data sets of protein-protein interactions, which can be portrayed as networks; one of the burning issues is to find protein complexes in such networks. The enormous size of protein-protein interaction (PPI) networks warrants the development of efficient computational methods for the extraction of significant complexes. Results: This paper presents an algorithm for detection of protein complexes in large interaction networks. In a PPI network, a node represents a protein and an edge represents an interaction. The input to the algorithm is the associated matrix of an interaction network and the outputs are protein complexes. The complexes are determined by way of finding clusters, i.e. the densely connected regions in the network. We also show and analyze some protein complexes generated by the proposed algorithm from typical PPI networks of Escherichia coli and Saccharomyces cerevisiae. A comparison between a PPI and a random network is also performed in the context of the proposed algorithm. Conclusion: The proposed algorithm makes it possible to detect clusters of proteins in PPI networks which mostly represent molecular biological functional units. Therefore, protein complexes determined solely on the basis of interaction data can help us to predict the functions of proteins, and they are also useful for understanding and explaining certain biological processes.
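
    Finding densely connected regions from the associated matrix can be illustrated with a greedy seed-and-grow sketch: start from a high-degree node and admit neighbors only while the subgraph stays dense. This is a simplified stand-in for the paper's clustering procedure, and the density and size thresholds are assumed values.

```python
import numpy as np

def dense_clusters(adj, min_density=0.7, min_size=3):
    """Greedily grow dense subgraphs from high-degree seeds.

    `adj` is the symmetric 0/1 association matrix of the PPI network.
    A candidate node joins a cluster only if the cluster's edge density
    (edges present / ordered pairs) stays at or above `min_density`.
    """
    n = adj.shape[0]
    unused = set(range(n))
    clusters = []
    while unused:
        seed = max(unused, key=lambda v: adj[v].sum())
        cluster = {seed}
        grown = True
        while grown:
            grown = False
            for v in sorted(unused - cluster):
                idx = sorted(cluster | {v})
                k = len(idx)
                if adj[np.ix_(idx, idx)].sum() / (k * (k - 1)) >= min_density:
                    cluster.add(v)
                    grown = True
        unused -= cluster
        if len(cluster) >= min_size:      # small leftovers are not complexes
            clusters.append(sorted(cluster))
    return clusters

# Toy network: two triangles (complexes) plus a pendant node 6.
adj = np.zeros((7, 7), dtype=int)
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (0, 6)]:
    adj[a, b] = adj[b, a] = 1
clusters = dense_clusters(adj)
```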

  14. A feature matching and fusion-based positive obstacle detection algorithm for field autonomous land vehicles

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2017-03-01

    Positive obstacles can cause damage to field robots while traveling in the field, and a field autonomous land vehicle is a typical field robot. This article presents a feature matching and fusion-based algorithm to detect obstacles using LiDARs for field autonomous land vehicles. There are three main contributions: (1) A novel setup method for a compact LiDAR is introduced. This method improves the LiDAR data density and reduces the blind region of the LiDAR sensor. (2) A mathematical model is deduced under this new setup method. The ideal scan line is generated by using the deduced mathematical model. (3) Based on the proposed mathematical model, a feature matching and fusion (FMAF)-based algorithm is presented, which is employed to detect obstacles. Experimental results show that the performance of the proposed algorithm is robust and stable, and the computing time is reduced by two orders of magnitude compared with other existing algorithms. This algorithm has been successfully applied to our autonomous land vehicle, which won the championship in the Chinese “Overcome Danger 2014” ground unmanned vehicle challenge.

  15. The Efficacy of Epidemic Algorithms on Detecting Node Replicas in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Narasimha Shashidhar

    2015-12-01

    A node replication attack against a wireless sensor network involves surreptitious efforts by an adversary to insert duplicate sensor nodes into the network while avoiding detection. Due to the lack of tamper-resistant hardware and the low cost of sensor nodes, replication attacks take little effort to carry out. Naturally, detecting these replica nodes is a very important task and has been studied extensively. In this paper, we propose a novel distributed, randomized sensor duplicate detection algorithm called Discard to detect node replicas in group-deployed wireless sensor networks. Our protocol is an epidemic, self-organizing duplicate detection scheme, which exhibits emergent properties. Epidemic schemes have found diverse applications in distributed computing: load balancing, topology management, audio and video streaming, computing aggregate functions, failure detection, and network and resource monitoring, to name a few. To the best of our knowledge, our algorithm is the first attempt at exploring the potential of this paradigm to detect replicas in a wireless sensor network. Through analysis and simulation, we show that our scheme achieves robust replica detection with substantially lower communication, computational, and storage requirements than prior schemes in the literature.
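
    The epidemic idea can be simulated in miniature: nodes gossip (id, location) claims to random peers each round, and any node that accumulates two different locations for one id raises a replica alarm. This is a toy model of the paradigm, not the Discard protocol; the network size, fanout, and round count are arbitrary assumptions.

```python
import random

def gossip_replica_detection(claims, n_nodes=20, rounds=10, fanout=2, seed=42):
    """Epidemic dissemination of location claims.

    `claims` is a list of (node_id, location). Claims spread by random
    push gossip; a node holding conflicting locations for one id flags it.
    """
    rng = random.Random(seed)
    stores = [dict() for _ in range(n_nodes)]
    for nid, loc in claims:                      # each claim enters at one node
        stores[rng.randrange(n_nodes)].setdefault(nid, set()).add(loc)
    for _ in range(rounds):
        for i in range(n_nodes):
            for j in rng.sample(range(n_nodes), fanout):   # push to peers
                for nid, locs in list(stores[i].items()):
                    stores[j].setdefault(nid, set()).update(locs)
    alarms = set()
    for st in stores:
        for nid, locs in st.items():
            if len(locs) > 1:                    # conflicting claims -> replica
                alarms.add(nid)
    return alarms

alarms = gossip_replica_detection([("a", (0, 0)), ("a", (9, 9)), ("b", (1, 1))])
```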

  16. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    Directory of Open Access Journals (Sweden)

    Ricardo Acevedo-Avila

    2016-05-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data-structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob-detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application; as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
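
    Single-scan blob labeling of this kind can be sketched with union-find bookkeeping in place of the paper's linked-list structure (the merging logic is equivalent): each foreground pixel inherits a label from its left or upper neighbor, and labels found to touch are merged.

```python
def label_blobs(img):
    """One-pass connected-component labeling (4-connectivity) on a 0/1 grid."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    nxt = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left and up:
                labels[y][x] = left
                union(left, up)             # two provisional labels touch
            elif left or up:
                labels[y][x] = left or up
            else:
                parent[nxt] = nxt           # new provisional label
                labels[y][x] = nxt
                nxt += 1
    blobs = {}                              # resolve merged labels to blobs
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                blobs.setdefault(find(labels[y][x]), []).append((x, y))
    return list(blobs.values())

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
blobs = label_blobs(img)                    # an L-shaped blob and a 2-pixel blob
```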

  17. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    Science.gov (United States)

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data-structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob-detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application; as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  18. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling.

    Science.gov (United States)

    Raghuram, Jayaram; Miller, David J; Kesidis, George

    2014-07-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been whitelisted by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present-day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (whitelisted) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
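
    The null-hypothesis idea can be sketched with the simplest generative model of this family: a character bigram model fit on whitelisted names, scoring new names by average smoothed log-likelihood per character. The tiny whitelist, smoothing constant, and vocabulary size are assumptions; the paper's model is richer (words, lengths, lexicon features).

```python
import math
from collections import Counter

def train_char_bigram(whitelist):
    """Count character bigrams (with ^/$ boundary markers) over benign names."""
    counts = Counter()
    for name in whitelist:
        s = "^" + name + "$"
        counts.update(zip(s, s[1:]))
    return counts

def log_likelihood_per_char(name, counts, alpha=1.0, vocab=28):
    """Average add-alpha-smoothed log P(next char | char); low values flag
    names unlike the whitelisted (null-hypothesis) distribution."""
    total_from = Counter()
    for (a, b), c in counts.items():
        total_from[a] += c
    s = "^" + name + "$"
    ll = sum(math.log((counts[(a, b)] + alpha) / (total_from[a] + alpha * vocab))
             for a, b in zip(s, s[1:]))
    return ll / (len(s) - 1)

whitelist = ["google", "facebook", "twitter", "wikipedia", "amazon",
             "youtube", "instagram", "linkedin", "netflix", "microsoft"]
counts = train_char_bigram(whitelist)
benign_score = log_likelihood_per_char("instagrams", counts)   # English-like
dga_score = log_likelihood_per_char("qzxvkwjqzx", counts)      # DGA-like
```

A pronounceable name reuses bigrams seen in the whitelist, so its per-character score sits well above that of a random-looking string.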

  19. A Novel Algorithm for Intrusion Detection Based on RASL Model Checking

    Directory of Open Access Journals (Sweden)

    Weijun Zhu

    2013-01-01

    The interval temporal logic (ITL) model checking (MC) technique enhances the power of intrusion detection systems (IDSs) to detect concurrent attacks due to the strong expressive power of ITL. However, an ITL formula suffers from difficulty in describing the time constraints between different actions in the same attack. To address this problem, we formalize a novel real-time interval temporal logic: real-time attack signature logic (RASL). Based on this new logic, we put forward a RASL model checking algorithm. Furthermore, we use RASL formulas to describe attack signatures and employ discrete timed automata to create an audit log. As a result, the RASL model checking algorithm can be used to automatically verify whether the automata satisfy the formulas, that is, whether the audit log coincides with the attack signatures. The simulation experiments show that the new approach effectively enhances the detection power of MC-based intrusion detection methods for a number of telnet attacks, p-trace attacks, and sixteen other types of attacks. These experiments indicate that the new algorithm can find several types of real-time attacks, whereas the existing MC-based intrusion detection approaches cannot do so.

  20. An improved algorithm of laser spot center detection in strong noise background

    Science.gov (United States)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering was used to remove the noise while preserving the edge details of the image. Secondly, binarization of the laser facula image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate the noise points inside and outside the spot. At last, the edge of the pretreated facula image was extracted and the laser spot center was obtained by using the circle fitting method. On the foundation of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method can effectively filter background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
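
    The final circle-fitting step can be sketched with a standard least-squares (Kasa) fit on the extracted edge points; the filtering, binarization, and morphology stages are omitted here, and the synthetic edge points below are an assumed stand-in for a pre-processed spot edge.

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit.

    From (x-a)^2 + (y-b)^2 = r^2, rewrite x^2 + y^2 = 2ax + 2by + c with
    c = r^2 - a^2 - b^2, and solve the linear system for (a, b, c).
    """
    A = np.column_stack([2 * xs, 2 * ys, np.ones(len(xs))])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Slightly perturbed edge points of a spot centered at (40, 25), radius 12.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
xs = 40 + 12 * np.cos(theta) + 0.01 * np.cos(7 * theta)
ys = 25 + 12 * np.sin(theta) + 0.01 * np.sin(5 * theta)
cx, cy, r = fit_circle(xs, ys)
```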

  1. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    In order to achieve perfectly blind detection in a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. Firstly, the original image is divided into non-overlapping image blocks and decomposable coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Secondly, the low-frequency coefficients of the block images are selected and then approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the adaptively quantized robust watermark is embedded in the low-frequency coefficients of the LWT. Experimental results show that the scheme is robust against common signal-processing attacks, while perfectly blind detection is achieved.

  2. Evaluation of anomaly detection algorithm using trans-admittance mammography with 60 × 60 electrode array.

    Science.gov (United States)

    Zhao, Mingkang; Wi, Hun; Oh, Tong In; Woo, Eung Je

    2013-01-01

    Electrical impedance imaging has the potential to detect an early stage of breast cancer due to higher admittivity values compared with those of normal breast tissues. Specifically, tumor size and the extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate. We applied the anomaly detection algorithm to the high-density trans-admittance mammography system for estimating the size and position of breast cancer. We tested four different anomaly sizes with three different conductivity contrasts at five different depths. From a frequency-difference trans-admittance map, we can readily observe the transversal position and estimate its size and depth. However, the size estimation was dependent on the admittivity contrast between anomaly and background. A robust detection algorithm that works regardless of the conductivity contrast is therefore required.

  3. A Forest Early Fire Detection Algorithm Based on Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    CHENG Qiang

    2014-03-01

    Wireless Sensor Networks (WSNs) adopt GHz bands as their communication carrier, and it has been found that the GHz carrier-attenuation model along the transmission path is associated with vegetation water content. In this paper, based on the RSSI mechanism of WSN nodes, we formed vegetation-dehydration sensors. Through the relationship between vegetation water content and carrier attenuation, we perceived variations in forest vegetation water content and the gestation process of early fires, and established an algorithm for early forest-fire detection. Experiments confirm that wireless sensor networks can accurately perceive vegetation dehydration events and forest fire events. Simulation results show that when WSN dehydration-perception (P2P) channels represent 75% or more of the carrier channels, the detection requirements can be met, which yields a new algorithm for early forest-fire detection.
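
    The attenuation-to-dehydration inference can be caricatured with a deliberately simple sketch: drier foliage absorbs less GHz energy, so a sustained rise in RSSI over a vegetated link is read as water loss. The linear dB-per-percent relation and both threshold constants are purely illustrative assumptions, not the paper's calibrated model.

```python
def dehydration_alarm(rssi_dbm_series, baseline_dbm,
                      gain_db_per_pct=0.15, dry_pct_threshold=30.0):
    """Estimate vegetation water loss from the change in link attenuation.

    Assumes (illustratively) that each percent of canopy water lost raises
    the received signal by `gain_db_per_pct` dB as foliage absorption drops;
    an alarm fires once the estimated loss crosses `dry_pct_threshold`.
    """
    est_water_loss_pct = (rssi_dbm_series[-1] - baseline_dbm) / gain_db_per_pct
    return est_water_loss_pct >= dry_pct_threshold, est_water_loss_pct

# A 6 dB rise over the -70 dBm baseline maps to ~40% estimated water loss.
alarm, loss = dehydration_alarm([-70.0, -68.0, -64.0], baseline_dbm=-70.0)
```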

  4. Evaluation of novel algorithm embedded in a wearable sEMG device for seizure detection

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sandor; Wolf, Peter

    2012-01-01

    We implemented a modified version of a previously published algorithm for detection of generalized tonic-clonic seizures into a prototype wireless surface electromyography (sEMG) recording device. The method was modified to require minimum computational load, and two parameters were trained… on prior sEMG data recorded with the device. Along with the normal sEMG recording, the device is able to set an alarm whenever the implemented algorithm detects a seizure. These alarms are annotated in the data file along with the signal. The device was tested at the Epilepsy Monitoring Unit (EMU…) at the Danish Epilepsy Center. Five patients were included in the study and two of them had generalized tonic-clonic seizures. All patients were monitored for 2–5 days. A double-blind study was made on the five patients. The overall result showed that the device detected four of seven seizures and had a false…

  5. Replica Node Detection Using Enhanced Single Hop Detection with Clonal Selection Algorithm in Mobile Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    L. S. Sindhuja

    2016-01-01

    Security of Mobile Wireless Sensor Networks is a vital challenge, as the sensor nodes are deployed in unattended environments and are prone to various attacks. One of them is the node replication attack, in which physically insecure nodes are captured by the adversary and cloned with the same identity as the captured node, and the adversary deploys an unpredictable number of replicas throughout the network. Hence replica node detection is an important challenge in Mobile Wireless Sensor Networks. Various replica node detection techniques have been proposed to detect these replica nodes. These methods incur control overheads, and the detection accuracy is low when the replica is selected as a witness node. This paper proposes to solve these issues by enhancing the Single Hop Detection (SHD) method using the Clonal Selection algorithm to detect the clones by selecting the appropriate witness nodes. The advantages of the proposed method include (i) an increase in the detection ratio, (ii) a decrease in the control overhead, and (iii) an increase in throughput. The performance of the proposed work is measured using the detection ratio, false detection ratio, packet delivery ratio, average delay, control overheads, and throughput. The implementation is done using ns-2 to demonstrate the validity of the proposed work.
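
    The conflict check a witness node applies in single-hop detection schemes can be sketched as follows: two signed location claims for the same id are mutually impossible if travelling between them would exceed the nodes' maximum speed. The speed bound and toy claims are assumed values; the clonal-selection witness choice is not modelled here.

```python
import math

def find_replicas(claims, max_speed=10.0):
    """Flag node ids whose location claims are mutually impossible.

    `claims` is a list of (node_id, x, y, t). Two claims for one id
    conflict when their distance exceeds max_speed * elapsed time.
    """
    by_id = {}
    for nid, x, y, t in claims:
        by_id.setdefault(nid, []).append((x, y, t))
    replicas = set()
    for nid, pts in by_id.items():
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                (x1, y1, t1), (x2, y2, t2) = pts[i], pts[j]
                if math.hypot(x2 - x1, y2 - y1) > max_speed * abs(t2 - t1):
                    replicas.add(nid)      # same id, impossible displacement
    return replicas

# "b" claims to be 500 m away one second later; "a" moved a plausible 5 m.
claims = [("a", 0, 0, 0), ("a", 5, 0, 1), ("b", 0, 0, 0), ("b", 500, 0, 1)]
```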

  6. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    Directory of Open Access Journals (Sweden)

    P. Amudha

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup’99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different.

  7. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    Science.gov (United States)

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different.
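
    The particle swarm half of the proposed hybrid can be sketched in isolation. Below is a minimal global-best PSO minimizing a toy objective; the inertia and acceleration weights are generic illustrative defaults, not the paper's EPSO settings.

```python
import random

def pso(f, dim, n_particles=30, iters=100, seed=7):
    """Minimize f over [-5, 5]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy objective: sphere function, minimum at the origin
best = pso(lambda x: sum(v * v for v in x), dim=3)
```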

  8. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    Science.gov (United States)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. 
This minimizes the need to move large amounts
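
    The two-stage idea above (cluster first to shrink the data, then score anomalies against the learned model of normal traffic) can be sketched as follows. This is a simplified stand-in: plain k-means for stage one, and a nearest-centroid distance score in place of the SVM-based stage two; the data and thresholds are synthetic.

```python
import math
import random

def kmeans(points, k, iters=50, seed=3):
    """Lloyd's algorithm on 2-D tuples; empty clusters keep their old center."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def anomaly_score(p, centers):
    """Distance to the nearest cluster center of normal traffic."""
    return min(math.dist(p, c) for c in centers)

# Normal traffic clusters around two prototypes; an attack point lies far away
rng = random.Random(0)
normal = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(100)]
          + [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(100)])
centers = kmeans(normal, 2)
```

    A point far from every centroid of normal traffic receives a high score and would be flagged for the second-stage detector.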

  9. Evaluation of accelerometer-based fall detection algorithms on real-world falls.

    Directory of Open Access Journals (Sweden)

    Fabio Bagalà

    Despite extensive preventive efforts, falls continue to be a major source of morbidity and mortality among the elderly. Real-time detection of falls and their urgent communication to a telecare center may enable rapid medical assistance, thus increasing the sense of security of the elderly and reducing some of the negative consequences of falls. Many different approaches have been explored to automatically detect a fall using inertial sensors. Although previously published algorithms report high sensitivity (SE) and high specificity (SP), they have usually been tested on simulated falls performed by healthy volunteers. We recently collected acceleration data during a number of real-world falls among a patient population with a high fall risk as part of the SensAction-AAL European project. The aim of the present study is to benchmark the performance of thirteen published fall-detection algorithms when they are applied to the database of 29 real-world falls. To the best of our knowledge, this is the first systematic comparison of fall detection algorithms tested on real-world falls. We found that the SP average of the thirteen algorithms was (mean ± std) 83.0% ± 30.3% (maximum value = 98%). The SE was considerably lower (SE = 57.0% ± 27.3%, maximum value = 82.8%), much lower than the values obtained on simulated falls. The number of false alarms generated by the algorithms during 1-day monitoring of three representative fallers ranged from 3 to 85. The factors that affect the performance of the published algorithms, when they are applied to real-world falls, are also discussed. These findings indicate the importance of testing fall-detection algorithms in real-life conditions in order to produce more effective automated alarm systems with higher acceptance. Further, the present results support the idea that a large, shared real-world fall database could, potentially, provide an enhanced understanding of the fall process and the information needed

  10. Evaluation of accelerometer-based fall detection algorithms on real-world falls.

    Science.gov (United States)

    Bagalà, Fabio; Becker, Clemens; Cappello, Angelo; Chiari, Lorenzo; Aminian, Kamiar; Hausdorff, Jeffrey M; Zijlstra, Wiebren; Klenk, Jochen

    2012-01-01

    Despite extensive preventive efforts, falls continue to be a major source of morbidity and mortality among the elderly. Real-time detection of falls and their urgent communication to a telecare center may enable rapid medical assistance, thus increasing the sense of security of the elderly and reducing some of the negative consequences of falls. Many different approaches have been explored to automatically detect a fall using inertial sensors. Although previously published algorithms report high sensitivity (SE) and high specificity (SP), they have usually been tested on simulated falls performed by healthy volunteers. We recently collected acceleration data during a number of real-world falls among a patient population with a high fall risk as part of the SensAction-AAL European project. The aim of the present study is to benchmark the performance of thirteen published fall-detection algorithms when they are applied to the database of 29 real-world falls. To the best of our knowledge, this is the first systematic comparison of fall detection algorithms tested on real-world falls. We found that the SP average of the thirteen algorithms was (mean ± std) 83.0% ± 30.3% (maximum value = 98%). The SE was considerably lower (SE = 57.0% ± 27.3%, maximum value = 82.8%), much lower than the values obtained on simulated falls. The number of false alarms generated by the algorithms during 1-day monitoring of three representative fallers ranged from 3 to 85. The factors that affect the performance of the published algorithms, when they are applied to real-world falls, are also discussed. These findings indicate the importance of testing fall-detection algorithms in real-life conditions in order to produce more effective automated alarm systems with higher acceptance. Further, the present results support the idea that a large, shared real-world fall database could, potentially, provide an enhanced understanding of the fall process and the information needed to design and
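
    A minimal sketch of the kind of threshold detector the study benchmarks: flag a window when the peak acceleration magnitude exceeds an impact threshold, then score sensitivity and specificity against labels. The 2.5 g threshold and the two toy windows are illustrative and are not one of the thirteen published algorithms.

```python
import math

def detect_falls(windows, threshold=2.5):
    """Flag windows whose peak acceleration magnitude (in g) exceeds threshold."""
    return [max(math.sqrt(x * x + y * y + z * z) for x, y, z in w) > threshold
            for w in windows]

def sensitivity_specificity(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    se = tp / (tp + fn) if tp + fn else 0.0
    sp = tn / (tn + fp) if tn + fp else 0.0
    return se, sp

# Two simulated windows: quiet standing (~1 g) and a fall impact (~3.2 g peak)
windows = [[(0.0, 0.0, 1.0)] * 5,
           [(0.0, 0.0, 1.0), (2.0, 2.0, 1.5), (0.0, 0.0, 1.0)]]
pred = detect_falls(windows)
se, sp = sensitivity_specificity(pred, [False, True])
```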

  11. The Hessian Blob Algorithm: Precise Particle Detection in Atomic Force Microscopy Imagery.

    Science.gov (United States)

    Marsh, Brendan P; Chada, Nagaraju; Sanganna Gari, Raghavendar Reddy; Sigdel, Krishna P; King, Gavin M

    2018-01-17

    Imaging by atomic force microscopy (AFM) offers high-resolution descriptions of many biological systems; however, regardless of resolution, conclusions drawn from AFM images are only as robust as the analysis leading to those conclusions. Vital to the analysis of biomolecules in AFM imagery is the initial detection of individual particles from large-scale images. Threshold and watershed algorithms are conventional for automatic particle detection but demand manual image preprocessing and produce particle boundaries which deform as a function of user-defined parameters, producing imprecise results subject to bias. Here, we introduce the Hessian blob to address these shortcomings. Combining a scale-space framework with measures of local image curvature, the Hessian blob formally defines particle centers and their boundaries, both to subpixel precision. Resulting particle boundaries are independent of user-defined parameters, with no image preprocessing required. We demonstrate through direct comparison that the Hessian blob algorithm more accurately detects biomolecules than conventional AFM particle detection techniques. Furthermore, the algorithm proves largely insensitive to common imaging artifacts and noise, delivering a stable framework for particle analysis in AFM.
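
    A simplified, single-scale sketch of the determinant-of-Hessian idea behind the Hessian blob: estimate the Hessian by finite differences and keep thresholded local maxima of its determinant. The full algorithm adds a scale-space search and subpixel boundaries; the synthetic image and threshold here are illustrative.

```python
import math

def hessian_det(img, y, x):
    """Determinant of the Hessian via central finite differences."""
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return dxx * dyy - dxy * dxy

def detect_blobs(img, det_threshold=0.5):
    """Return (y, x) of thresholded local maxima of the Hessian determinant."""
    h, w = len(img), len(img[0])
    dets = [[hessian_det(img, y, x) if 0 < y < h - 1 and 0 < x < w - 1 else 0.0
             for x in range(w)] for y in range(h)]
    blobs = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = dets[y][x]
            if v > det_threshold and all(
                    v >= dets[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)):
                blobs.append((y, x))
    return blobs

# Synthetic "AFM image": flat background with one Gaussian-like particle at (3, 4)
img = [[math.exp(-((y - 3) ** 2 + (x - 4) ** 2) / 2.0) for x in range(9)]
       for y in range(7)]
blobs = detect_blobs(img)
```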

  12. An improved reconstruction algorithm based on multi-user detection for uplink grant-free NOMA

    Directory of Open Access Journals (Sweden)

    Hou Chengyan

    2017-01-01

    For the traditional orthogonal matching pursuit (OMP) algorithm in multi-user detection (MUD) for uplink grant-free NOMA, BER performance is poor, so in this paper we propose a temporal-correlation orthogonal matching pursuit algorithm (TOMP) to realize multi-user detection. The core idea of TOMP is to use the temporal correlation of the active user sets to achieve user-activity and data detection over a number of continuous time slots. We use the estimated active user set in the current time slot as a priori information to estimate the active user set for the next slot. The algorithm maintains an active user set Tˆl of size K (K is the number of users), which is modified in each iteration: an active user believed to be reliable in one iteration but shown to be in error in another can be added to the set Tˆl or removed from it. Theoretical analysis of the improved algorithm provides a guarantee that the multiple users can be successfully detected with high probability. The simulation results show that the proposed scheme can achieve better bit error rate (BER) performance in the uplink grant-free NOMA system.
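
    The greedy pursuit underlying OMP can be illustrated with plain matching pursuit (it omits OMP's orthogonal re-projection of the residual onto the selected support). With an orthonormal toy dictionary it still recovers the sparse "active user" coefficients exactly; the dictionary and signal below are illustrative, not the paper's system model.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(atoms, y, sparsity):
    """Greedy sparse recovery: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    residual = list(y)
    coeffs = {}
    for _ in range(sparsity):
        j = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

# Orthonormal toy dictionary (standard basis); y has two active "users"
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
y = [0.0, 3.0, 0.0, -1.5]
coeffs, residual = matching_pursuit(atoms, y, sparsity=2)
```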

  13. Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm

    Science.gov (United States)

    Neri, P.

    2017-05-01

    Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that some hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at their early stage. The present paper proposes an improved algorithm which allows detection of a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving precious information about the wheel's health. This configuration determines an acquisition time for each blade which becomes shorter as the machine's rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis yields poor frequency resolution and is not suitable for small frequency shift detection. Non-Harmonic Fourier Analysis instead showed high reliability in vibration frequency estimation even with data samples collected in a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
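
    The core advantage described above, estimating a sinusoid's frequency off the DFT bin grid from a short record, can be sketched by scanning a fine frequency grid and maximizing the correlation amplitude. This is only a hedged illustration of the idea, not the paper's Non-Harmonic Fourier Analysis implementation; signal, rates, and grid are all invented.

```python
import math

def estimate_frequency(signal, fs, f_min, f_max, step=0.1):
    """Scan a fine frequency grid (not restricted to the DFT spacing fs/N)
    and return the frequency with the largest correlation amplitude."""
    best_f, best_amp = f_min, -1.0
    f = f_min
    while f <= f_max:
        w = 2 * math.pi * f / fs
        c = sum(s * math.cos(w * i) for i, s in enumerate(signal))
        s_ = sum(s * math.sin(w * i) for i, s in enumerate(signal))
        amp = math.hypot(c, s_)
        if amp > best_amp:
            best_amp, best_f = amp, f
        f = round(f + step, 10)
    return best_f

# Short record: 50 samples at 1 kHz of a 123.4 Hz sine; DFT bins are 20 Hz apart
fs = 1000.0
signal = [math.sin(2 * math.pi * 123.4 * i / fs) for i in range(50)]
f_hat = estimate_frequency(signal, fs, 100.0, 150.0, step=0.1)
```

    The estimate lands within a couple of hertz of the true frequency, an order of magnitude finer than the 20 Hz DFT bin spacing for this record length.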

  14. An algorithmic implementation of physical reflective boundary conditions in particle methods: Collision detection and response

    Science.gov (United States)

    Fraga Filho, Carlos Alberto Dutra

    2017-11-01

    The aim of this paper is to present a computational algorithmic implementation of physical reflective boundary conditions and applications, for use in particle methods. It is motivated by the lack of a straightforward study in the literature dedicated to the presentation of this reflective boundary condition, based on Newton's restitution law and the foundations of analytic geometry. Particular attention is given here to the procedures of collision detection and response. The importance of the consistency of input data and an appropriate temporal integration technique for use in the particle method is also discussed. Validation tests are performed, with the results of the algorithm verified using analytical results. Numerical simulations of static and dynamic problems are carried out. The analysis of the numerical results shows that the physical reflective boundary conditions are consistent and that the algorithm has been properly implemented.
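
    The collision-response step based on Newton's restitution law can be sketched for the simplest case of a particle crossing a horizontal wall: mirror the penetrating position back inside and reverse and scale the normal velocity component. The restitution coefficient and geometry below are illustrative, not the paper's test cases.

```python
def reflect(pos, vel, wall_y=0.0, restitution=0.8):
    """Collision response against the plane y = wall_y using Newton's
    restitution law: the normal velocity component is reversed and scaled."""
    x, y = pos
    vx, vy = vel
    if y < wall_y and vy < 0:              # penetrated the wall moving downward
        y = wall_y + (wall_y - y)          # mirror position back inside
        vy = -restitution * vy             # restitution on the normal component
    return (x, y), (vx, vy)

# A particle that ended the time step 0.2 below the wall, moving downward
pos, vel = reflect((1.0, -0.2), (0.5, -2.0), restitution=0.5)
```

    The tangential component is left untouched, which corresponds to a frictionless wall; a tangential restitution factor could be added the same way.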

  15. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    Directory of Open Access Journals (Sweden)

    Didier Auroux

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm, to image inpainting and segmentation.
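
    A hedged, discrete stand-in for the minimal-path step: Dijkstra's algorithm on a 4-connected grid finds the cheapest path through a cost image, which is the role fast marching plays (on a continuous front) in the paper. The toy cost image below encodes a low-cost "edge" along the middle row and is purely illustrative.

```python
import heapq

def minimal_path_cost(cost, start, goal):
    """Dijkstra on a 4-connected grid: a discrete stand-in for fast marching."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            return d
        if d > dist.get((y, x), float("inf")):
            continue                       # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(pq, (nd, (ny, nx)))
    return float("inf")

# Low cost along an "edge" (middle row), high cost elsewhere
cost = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
total = minimal_path_cost(cost, (1, 0), (1, 3))
```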

  16. SHADOW DETECTION FROM VERY HIGH RESOLUTION SATELLITE IMAGE USING GRABCUT SEGMENTATION AND RATIO-BAND ALGORITHMS

    Directory of Open Access Journals (Sweden)

    N. M. S. M. Kadhim

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further understanding of the built environment. However, to extract shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches which are considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects is examined using the true-colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectrum range (RGB true colour), the results demonstrate that shadow regions in the WorldView-3 image are reasonably well separated from other objects by applying the GrabCut algorithm. In addition, the derived shadow map from the Quickbird image indicates
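
    The first approach's band-ratio idea can be sketched per pixel: a shadow candidate is dark in the visible band and lacks the high NIR/visible ratio that dark vegetation shows. The thresholds and the one-row toy image are illustrative, not the paper's calibrated values.

```python
def shadow_mask(nir, vis, dark_thresh=0.2, ratio_thresh=2.0, eps=1e-6):
    """A pixel is a shadow candidate if it is dark in the visible band AND its
    NIR/visible ratio is low (dark vegetation is visible-dark but NIR-bright)."""
    return [[(v < dark_thresh) and (n / (v + eps)) < ratio_thresh
             for n, v in zip(nrow, vrow)]
            for nrow, vrow in zip(nir, vis)]

# One-row toy image: [sunlit roof, shadowed pavement, sunlit vegetation]
nir = [[0.60, 0.05, 0.70]]
vis = [[0.55, 0.05, 0.10]]
mask = shadow_mask(nir, vis)
```

    Only the shadowed pavement pixel is flagged: the roof fails the darkness test and the vegetation fails the ratio test.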

  17. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Min-Yin Liu

    2017-05-01

    Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context, and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms. They include three Fourier-based, one continuous wavelet transform (CWT)-based, and two Hilbert-Huang transform (HHT)-based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737.
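
    SPEA2 itself involves strength values, density estimation, and an archive; the piece that matters for deriving evaluation metrics is the Pareto front. A minimal non-dominated-front computation over hypothetical (sensitivity, precision) detector scores, both objectives to be maximized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical detector settings scored as (sensitivity, precision)
scores = [(0.9, 0.4), (0.8, 0.6), (0.6, 0.8), (0.5, 0.5), (0.7, 0.7)]
front = pareto_front(scores)
```

    Only (0.5, 0.5) is dropped: every other setting trades sensitivity against precision without being beaten on both at once.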

  18. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    Science.gov (United States)

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    The clustering pattern and motifs give immense information about any biological data. An application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes is depicted in this paper. Recent progress in the field of exosome research, and more particularly regarding exosomal miRNAs, has spurred much bioinformatics-based research. The information on clustering patterns and candidate motifs in miRNAs of exosomal origin would help in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining the clustering pattern and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently used and executed on various programming languages/platforms. Data were clustered and sequence candidate motifs were detected successfully. The results were compared and validated with some available web tools such as 'BLASTN' and 'MEME suite'. The machine learning algorithms for the aforementioned objectives were applied successfully. This work elaborated the utility of machine learning algorithms and language platforms to achieve the tasks of clustering and candidate motif detection in exosomal miRNAs. With the information on the mentioned objectives, deeper insight would be gained for analyses of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.

  19. Real-time implementation of a multispectral mine target detection algorithm

    Science.gov (United States)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX Algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. A goal to demonstrate RX in real-time was begun in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual-PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be quickly changed and a new implementation automatically generated. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
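
    The RX detector scores each pixel by its squared Mahalanobis distance to background statistics. Below is a minimal two-band, global-statistics sketch with an explicit 2x2 covariance inverse; the operational algorithm described above uses a spatially variant background covariance under a sliding mask, and the data here are synthetic.

```python
import random

def rx_scores(pixels):
    """RX anomaly score per pixel: squared Mahalanobis distance to the
    global background mean/covariance (2-band case)."""
    n = len(pixels)
    m0 = sum(p[0] for p in pixels) / n
    m1 = sum(p[1] for p in pixels) / n
    c00 = sum((p[0] - m0) ** 2 for p in pixels) / n
    c11 = sum((p[1] - m1) ** 2 for p in pixels) / n
    c01 = sum((p[0] - m0) * (p[1] - m1) for p in pixels) / n
    det = c00 * c11 - c01 * c01
    i00, i01, i11 = c11 / det, -c01 / det, c00 / det   # 2x2 inverse
    scores = []
    for p in pixels:
        d0, d1 = p[0] - m0, p[1] - m1
        scores.append(d0 * (i00 * d0 + i01 * d1) + d1 * (i01 * d0 + i11 * d1))
    return scores

# Tight background cloud plus one spectrally anomalous "mine-like" pixel
rng = random.Random(2)
pixels = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)] + [(6.0, 6.0)]
scores = rx_scores(pixels)
```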

  20. Analysis of Correlation between an Accelerometer-Based Algorithm for Detecting Parkinsonian Gait and UPDRS Subscales

    Directory of Open Access Journals (Sweden)

    Alejandro Rodríguez-Molinero

    2017-09-01

    Background: Our group earlier developed a small monitoring device which uses accelerometer measurements to accurately detect motor fluctuations in patients with Parkinson's (On and Off states), based on an algorithm that characterizes gait through the frequency content of strides. To further validate the algorithm, we studied the correlation of its outputs with the motor section of the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III). Method: Seventy-five patients suffering from Parkinson's disease were asked to walk in both the Off and the On state while wearing the inertial sensor on the waist. Additionally, all patients were administered the motor section of the UPDRS in both motor phases. Tests were conducted at the patient's home. Convergence between the algorithm and the scale was evaluated using Spearman's correlation coefficient. Results: Correlation with the UPDRS-III was moderate (rho −0.56; p < 0.001). Correlation between the algorithm outputs and the gait item in the UPDRS-III was good (rho −0.73; p < 0.001). The factorial analysis of the UPDRS-III has repeatedly shown that several of its items can be clustered under the so-called Factor 1: "axial function, balance, and gait." The correlation between the algorithm outputs and this factor of the UPDRS-III was −0.67 (p < 0.01). Conclusion: The correlation achieved by the algorithm with the UPDRS-III scale suggests that this algorithm might be a useful tool for monitoring patients with Parkinson's disease and motor fluctuations.
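
    Spearman's rho, used above to evaluate convergence, is the Pearson correlation of ranks. A self-contained sketch with hypothetical algorithm outputs and UPDRS-III-like scores (a perfectly monotone decreasing relation, so rho comes out as −1):

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: algorithm output falls as clinical severity rises
algo = [0.9, 0.7, 0.6, 0.4, 0.2]
updrs = [10, 18, 25, 33, 41]
rho = spearman_rho(algo, updrs)
```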

  1. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    Science.gov (United States)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.

  2. A study of position independent algorithms for phone-based gait frequency detection.

    Science.gov (United States)

    Tarashansky, Alexander; Vathsangam, Harshvardhan; Sukhatme, Gaurav S

    2014-01-01

    Estimating gait frequency is an important component in the detection and diagnosis of various medical conditions. Smartphone-based kinematic sensors offer a window of opportunity in free-living gait frequency estimation. The main issue with smartphone-based gait frequency estimation algorithms is how to adjust for variations in orientation and location of the phone on the human body. While numerous algorithms have been implemented to account for these differences, little work has been done in comparing these algorithms. In this study, we compare various position independent algorithms to determine which are more suited to robust gait frequency estimation. Using sensor data collected from volunteers walking with a smartphone, we examine the effect of using three different time series with the magnitude, weighted sum, and closest vertical component algorithms described in the paper. We also test two different methods of extracting step frequency: time domain peak counting and spectral analysis. The results show that the choice of time series does not significantly affect the accuracy of frequency measurements. Furthermore, both time domain and spectral approaches show comparable results. However, time domain approaches are sensitive to false-positives while spectral approaches require a minimum set of repetitive measurements. Our study suggests a hybrid approach where both time-domain and spectral approaches be used together to complement each other's shortcomings.
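
    The two extraction methods compared above can be sketched side by side on a synthetic gait-magnitude signal: time-domain peak counting versus picking the dominant bin of a naive DFT. The sampling rate and signal are invented for illustration.

```python
import math

def peak_count_frequency(signal, fs, min_height=0.0):
    """Steps per second estimated from local maxima above min_height."""
    peaks = sum(1 for i in range(1, len(signal) - 1)
                if signal[i] > min_height
                and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1])
    return peaks * fs / len(signal)

def spectral_frequency(signal, fs):
    """Dominant frequency via a naive DFT (ignoring the DC bin)."""
    n = len(signal)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_p, best_k = p, k
    return best_k * fs / n

# 10 s of a 2 Hz "gait" magnitude signal sampled at 50 Hz
fs = 50.0
signal = [math.sin(2 * math.pi * 2.0 * i / fs) for i in range(500)]
```

    On this clean signal both estimators agree; as the abstract notes, peak counting degrades first under spurious peaks, while the spectral estimate needs enough repetitions to form a clear bin.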

  3. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    Directory of Open Access Journals (Sweden)

    Jingjing Ma

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
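
    One of the two objectives above, modularity, is easy to sketch: the fraction of intra-community edges minus its expectation under a degree-preserving null model. A toy graph of two triangles joined by a bridge shows why the two-community split scores higher than the trivial one-community partition.

```python
def modularity(edges, communities):
    """Newman modularity Q of a partition of an undirected graph."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm = {node: c for c, nodes in enumerate(communities) for node in nodes}
    q = sum(1.0 for u, v in edges if comm[u] == comm[v]) / m
    q -= sum((sum(deg[n] for n in nodes) / (2.0 * m)) ** 2
             for nodes in communities)
    return q

# Two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
bad = modularity(edges, [{0, 1, 2, 3, 4, 5}])
```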

  4. Algorithm development for automated outlier detection and background noise reduction during NIR spectroscopic data processing

    Science.gov (United States)

    Abookasis, David; Workman, Jerome J.

    2011-09-01

    This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on a spectral cross-correlation and filtering process combined with partial-least-squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on spectral correlation criteria we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) is applied, before use in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed. Once this matrix has been built, it is used to predict the concentration of the unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, coefficient of determination, etc. were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low prediction error, high sensitivity and specificity, and few false negative (Type II error) samples.
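
    The outlier-screening step can be sketched with a simple stand-in criterion: flag any spectrum whose Pearson correlation with the mean spectrum falls below a threshold. The threshold and toy spectra are illustrative, not the authors' developed criteria.

```python
def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def screen_outliers(spectra, threshold=0.95):
    """Flag spectra poorly correlated with the mean spectrum."""
    n = len(spectra[0])
    mean = [sum(s[i] for s in spectra) / len(spectra) for i in range(n)]
    return [pearson(s, mean) < threshold for s in spectra]

# Three consistent toy spectra plus one with a reversed shape
spectra = [[0.0, 1.0, 2.0, 3.0, 2.0, 1.0],
           [0.0, 1.1, 2.2, 3.3, 2.2, 1.1],
           [0.1, 1.1, 2.1, 3.1, 2.1, 1.1],
           [3.0, 2.0, 1.0, 0.0, 1.0, 2.0]]
flags = screen_outliers(spectra)
```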

  5. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    Science.gov (United States)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects virtual information world with real world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT based on salient features of datasets. The thresholding algorithms Otsu, Kapur, Rosin and classification methods k-means, EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient) and JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, it reflected degraded performance for small object size with minor changes. Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environment with small object size producing slow change, no shadowing and scarce illumination changes.
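
    Otsu's method, one of the thresholding algorithms compared above, picks the gray level that maximizes the between-class variance of a histogram. A minimal sketch on a hypothetical bimodal 8-bin histogram:

```python
def otsu_threshold(histogram):
    """Otsu's method: gray level maximizing between-class variance.
    Pixels with level <= t are "background", the rest "foreground"."""
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(histogram):
        w_bg += h                      # background weight
        if w_bg == 0:
            continue
        w_fg = total - w_bg            # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * h
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark background pixels vs bright changed pixels
hist = [10, 40, 30, 5, 0, 4, 25, 12]
t = otsu_threshold(hist)
```

    The chosen threshold falls in the valley between the two modes, separating unchanged background from changed regions.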

  6. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    Science.gov (United States)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted increasing attention because of their immersive stereoscopic experience. However, 3D content is still scarce, so estimating depth information from a video for 2D to 3D conversion is increasingly important. In this paper, we present a novel algorithm that estimates depth information from a video via scene classification. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For the landscape type, a specific algorithm divides the image into many blocks and assigns depth values using the relative height cue. For the close-up type, a saliency-based method enhances the foreground and combines it with a global depth gradient to generate the final depth map. For the linear perspective type, vanishing line detection locates the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; the rest of the image is then assigned depth values according to each point's distance from the vanishing point. Finally, depth image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.

  7. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    Science.gov (United States)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image becomes a significant task before performing lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification. A disk expansion algorithm is then applied to automatically detect the lesion boundary. A set of sub-images including smooth and irregular boundaries in tumor objects, and those in the speckle-noised background, are trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either object- or background-oriented. The enhanced binary image can highlight the object and suppress the speckle noise, and it can be regarded as a degraded paint character (DPC) image containing closure noise, which is well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method was successfully demonstrated in our previous works. Boundary detection of ultrasonic breast lesions can thus be treated as equivalent to the removal of speckle noise. By applying the disk expansion method to the binary image, we obtain a significant radius-based image in which the radius for each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used to search the complete breast lesion region, so that the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to

  8. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Topouzelis

    2008-10-01

    Full Text Available This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern, since they damage fragile marine and coastal ecosystems. The amount of pollutant discharges and their associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills on radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques for detecting dark formations on SAR images, the features extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  9. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve.

    Science.gov (United States)

    Xu, Lili; Luo, Shuqian

    2010-01-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical-morphological black top-hat; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. Selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
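
The candidate-detection stage relies on the morphological black top-hat: the greyscale closing of the image minus the image itself, which responds to dark spots smaller than the structuring element (such as MAs) while larger structures cancel out. A minimal NumPy sketch, with a flat square structuring element whose 5x5 size is an illustrative choice:

```python
import numpy as np

def _filter(img, k, fn):
    """Apply fn (min or max) over a k-x-k neighbourhood of every
    pixel, i.e. a flat erosion/dilation with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = fn(p[i:i + k, j:j + k])
    return out

def black_tophat(img, k=5):
    """Closing (dilation then erosion) minus the original image."""
    closing = _filter(_filter(img, k, np.max), k, np.min)
    return closing - img

# Bright fundus-like background with one small dark spot (an MA candidate).
img = np.full((15, 15), 100.0)
img[7, 7] = 20.0
response = black_tophat(img)            # peaks at the dark spot
```

Thresholding `response` yields the MA candidates that are then passed to feature extraction and the SVM.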

  10. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
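
A minimal sketch of the original swinging door segmentation that the paper optimizes (the dynamic-programming merging and parameter tuning are omitted; the tolerance `eps` is the parameter being optimized):

```python
def swinging_door(values, eps):
    """Segment a series into piecewise-linear ramps: a segment ends
    when no single line from its anchor can pass within +/- eps of
    every intervening point (the two 'doors' close on each other)."""
    n = len(values)
    if n < 2:
        return [(0, n - 1)] if n else []
    segments = []
    anchor = 0
    slope_hi, slope_lo = float("inf"), float("-inf")
    for j in range(1, n):
        dt = j - anchor
        slope_hi = min(slope_hi, (values[j] + eps - values[anchor]) / dt)
        slope_lo = max(slope_lo, (values[j] - eps - values[anchor]) / dt)
        if slope_lo > slope_hi:         # doors closed: end segment at j-1
            segments.append((anchor, j - 1))
            anchor = j - 1
            slope_hi = values[j] + eps - values[anchor]
            slope_lo = values[j] - eps - values[anchor]
    segments.append((anchor, n - 1))
    return segments

# A steady up-ramp followed by a flat stretch splits into two segments.
segments = swinging_door([0, 1, 2, 3, 3, 3, 3], eps=0.1)   # [(0, 3), (3, 6)]
```

In the paper, segments produced this way are then merged by dynamic programming, and segments whose start-to-end power change exceeds a chosen magnitude are reported as significant ramps.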

  11. A new algorithm for detecting and correcting bad pixels in infrared images

    Directory of Open Access Journals (Sweden)

    Andrés David Restrepo Girón

    2012-10-01

    Full Text Available An image processing algorithm detects and replaces abnormal pixels individually, highlighting them amongst their neighbours in a sequence of thermal images without affecting the overall texture, unlike classical filtering. Bad pixels arising from manufacture or the constant use of a CCD device in an IR camera are thus detected and replaced with a very good success rate, thereby reducing the risk of misinterpretation. Some thermal sequences from CFRP plates, taken by a Cincinnati Electronics InSb IR camera, were used for developing and testing this algorithm. The results were compared to a detailed list of bad pixels given by the manufacturer (about 70% coincidence). This work is relevant considering that the number of papers on this subject is low; most of them address astronomical image pre-processing. Moreover, thermographic non-destructive testing (TNDT) techniques are gaining popularity in Colombia at introductory levels in industrial sectors such as energy generation and transmission, sugar production, and military aeronautics.

  12. An effective detection algorithm for region duplication forgery in digital images

    Science.gov (United States)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This makes it easy to forge digital images by adding or removing information. In order to detect such forgeries, in particular region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is then processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are sorted lexicographically, and duplicated image blocks are detected by comparing the resulting metric values. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
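
The block-matching core of such a scheme can be sketched as follows; the quadrant-mean features and exact-match criterion used here are simplified stand-ins for the paper's DWT/Fourier features and comparison metric:

```python
import numpy as np

def find_duplicated_blocks(img, b=4):
    """Slide b-x-b blocks over the image, describe each block by a
    small feature vector (here: the mean of its four quadrants),
    sort the vectors lexicographically, and report pairs of distinct
    positions whose features match exactly."""
    H, W = img.shape
    feats = []
    for i in range(H - b + 1):
        for j in range(W - b + 1):
            blk = img[i:i + b, j:j + b].astype(float)
            h = b // 2
            f = (blk[:h, :h].mean(), blk[:h, h:].mean(),
                 blk[h:, :h].mean(), blk[h:, h:].mean())
            feats.append((f, (i, j)))
    feats.sort(key=lambda x: x[0])      # lexicographic sort
    pairs = []
    for a, c in zip(feats, feats[1:]):  # matches end up adjacent
        if a[0] == c[0] and a[1] != c[1]:
            pairs.append((a[1], c[1]))
    return pairs

# Forge an image by copying one region onto another, then detect it.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(12, 12))
img[8:12, 8:12] = img[0:4, 0:4]         # region duplication forgery
pairs = find_duplicated_blocks(img, b=4)
```

The lexicographic sort is what keeps the search efficient: identical feature vectors become neighbours, so only adjacent entries need comparing.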

  13. Algorithms for High-speed Generating CRC Error Detection Coding in Separated Ultra-precision Measurement

    Science.gov (United States)

    Zhi, Z.; Tan, J. B.; Huang, X. D.; Chen, F. F.

    2006-10-01

    In order to solve the contradiction between error detection capability, transmission rate, and system resources in the data transmission of ultra-precision measurement, an algorithm for high-speed generation of CRC error detection codes is put forward in this paper. Theoretical formulae for calculating the CRC code of 16-bit segmented data are obtained by derivation. On the basis of the 16-bit segmented-data formulae, an optimized algorithm for 32-bit segmented CRC coding is obtained, which resolves the trade-off between memory occupancy and coding speed. Data coding experiments were conducted successfully on a high-speed ARM embedded system. The results show that the method offers high error-detection ability and high speed while saving system resources, improving the real-time performance and reliability of measurement data communication.
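
The segmented, table-driven approach that such formulae enable can be illustrated with a standard byte-at-a-time CRC-16 (the common CRC-16/CCITT-FALSE parameters and this Python sketch are illustrative; the paper derives its own segmented formulae for an ARM target):

```python
def make_crc16_table(poly=0x1021):
    """Precompute the CRC of every possible 8-bit segment, trading
    256 words of memory for one lookup per byte at run time."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
        table.append(crc)
    return table

TABLE = make_crc16_table()

def crc16(data, init=0xFFFF):
    """CRC-16/CCITT-FALSE over a byte string, one table lookup per byte."""
    crc = init
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc

checksum = crc16(b"123456789")          # standard check value 0x29B1
```

Processing a byte per lookup keeps the per-word cost constant, which is exactly the memory-versus-speed trade-off the paper optimizes for 32-bit segments.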

  14. A Network Intrusions Detection System based on a Quantum Bio Inspired Algorithm

    OpenAIRE

    Soliman, Omar S.; Rassem, Aliaa

    2014-01-01

    Network intrusion detection systems (NIDSs) have the role of identifying malicious activities by monitoring the behavior of networks. Due to the currently high volume of network traffic, in addition to the increased number of attacks and their dynamic properties, NIDSs face the challenge of improving their classification performance. Bio-Inspired Optimization Algorithms (BIOs) are used to automatically extract the discrimination rules of normal or abnormal behavior to improve the classificat...

  15. The Invisibles: A Detection Algorithm to Trace the Faintest Milky Way Satellites

    OpenAIRE

    Walsh, Shane; Willman, Beth; Jerjen, Helmut

    2008-01-01

    [Abridged] A specialized data mining algorithm has been developed using wide-field photometry catalogues, enabling systematic and efficient searches for resolved, extremely low surface brightness satellite galaxies in the halo of the Milky Way (MW). Tested and calibrated with the Sloan Digital Sky Survey Data Release 6 (SDSS-DR6) we recover all fifteen MW satellites recently detected in SDSS, six known MW/Local Group dSphs in the SDSS footprint, and 19 previously known globular and open clust...

  16. A Universal Fast Algorithm for Sensitivity-Based Structural Damage Detection

    Directory of Open Access Journals (Sweden)

    Q. W. Yang

    2013-01-01

    Full Text Available Structural damage detection using measured response data has emerged as a new research area in the civil, mechanical, and aerospace engineering communities in recent years. In this paper, a universal fast algorithm is presented for sensitivity-based structural damage detection, which can quickly improve the calculation accuracy of existing sensitivity-based techniques without any high-order sensitivity analysis or multiple iterations. The key formula of the universal fast algorithm is derived from stiffness and flexibility matrix spectral decomposition theory. With the introduction of the key formula, the proposed method quickly achieves more accurate results than the original sensitivity-based methods, regardless of whether the damage is small or large. Three examples are used to demonstrate the feasibility and superiority of the proposed method. It has been shown that the universal fast algorithm is simple to implement and achieves higher accuracy than the existing sensitivity-based damage detection methods.

  17. A density based link clustering algorithm for overlapping community detection in networks

    Science.gov (United States)

    Zhou, Xu; Liu, Yanheng; Wang, Jian; Li, Chun

    2017-11-01

    Overlapping is an interesting and common characteristic of community structure in networks. Link clustering methods for overlapping community detection have attracted a lot of attention in the area of social network applications. However, they may produce clustering results with excessive overlap, and may mistakenly assign bridge and border edges to adjacent communities. To solve this problem, a density-based link clustering algorithm is proposed in this study to improve the accuracy of detecting overlapping communities in networks. It creates a number of clusters containing core edges only, based on a concept named core density reachability, during the expansion. Then an updating strategy for unclassified edges is designed to assign them to the closest cluster. In addition, a similarity measure for computing the similarity between two edges is presented. Experiments on synthetic and real networks have been conducted. The experimental results demonstrate that our method performs better than other algorithms at detecting community structure and overlapping nodes, achieving NMI values nearly 15% higher than those of other algorithms on some synthetic networks.

  18. Dramatyping: a generic algorithm for detecting reasonable temporal correlations between drug administration and lab value alterations

    Directory of Open Access Journals (Sweden)

    Axel Newe

    2016-03-01

    Full Text Available According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for the detection of lab value courses that show changes in temporal correlation with the administration of a drug and it has a specificity of 0.967 for the detection of lab value courses that show no changes. Therefore, the algorithm is appropriate to screen the data of electronic health records and to support human experts in revealing adverse drug reactions. A reference implementation in Python programming language is available.
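
A minimal sketch of such a temporal-correlation check on normalized lab values; the window length and shift threshold `delta` used here are illustrative assumptions, not the article's published criteria:

```python
from statistics import mean

def temporally_correlated(times, values, admin_time, window=7.0, delta=0.2):
    """Flag a normalised lab-value course as temporally correlated
    with a drug administration when the mean value within `window`
    days after administration shifts by more than `delta` relative
    to the mean before administration."""
    before = [v for t, v in zip(times, values) if t < admin_time]
    after = [v for t, v in zip(times, values)
             if admin_time <= t <= admin_time + window]
    if not before or not after:
        return False                    # not enough data to judge
    return abs(mean(after) - mean(before)) > delta

days = list(range(10))
shifted = [1.0] * 5 + [0.5] * 5         # value drops after day 5
flat = [1.0] * 10                       # no change over the course
hit = temporally_correlated(days, shifted, admin_time=5)    # True
miss = temporally_correlated(days, flat, admin_time=5)      # False
```

Because the values are normalised, the same thresholds can be screened across heterogeneous lab parameters, which is what makes the approach universally applicable.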

  19. Intrusion Detection Algorithm for Mitigating Sinkhole Attack on LEACH Protocol in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ranjeeth Kumar Sundararajan

    2015-01-01

    Full Text Available In a wireless sensor network (WSN), the sensors are deployed and placed uniformly to transmit the sensed data to a centralized station periodically. The sinkhole attack is a major threat at the WSN network layer and remains a challenging issue in sensor networks: a malicious node attracts packets from the other normal sensor nodes and drops them. Thus, this paper proposes an Intrusion Detection System (IDS) mechanism to detect the intruder in a network which uses the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol for its routing operation. In the proposed algorithm, detection metrics such as the number of packets transmitted and received are used by the IDS agent to compute the intrusion ratio (IR). The computed numeric or nonnumeric value represents normal or malicious activity. As soon as a sinkhole attack is captured, the IDS agent alerts the network to stop the data transmission, making the network resilient to sinkhole attacks. Simulation results show that the proposed algorithm is more efficient than existing work, namely MS-LEACH, in terms of computational complexity and energy consumption. Moreover, the algorithm was numerically analyzed using TETCOS NETSIM.
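
The intrusion-ratio idea can be sketched as below; the exact ratio definition and the 0.5 alert threshold are illustrative assumptions, not the paper's calibrated values:

```python
def intrusion_ratio(forwarded, received):
    """IR = packets forwarded on / packets received by a node. A
    sinkhole attracts traffic and silently drops it, so its IR
    collapses toward zero while honest relays stay near one."""
    return forwarded / received if received else 1.0

def detect_sinkholes(stats, threshold=0.5):
    """Flag nodes whose intrusion ratio falls below `threshold`."""
    return [node for node, (fwd, rcv) in stats.items()
            if intrusion_ratio(fwd, rcv) < threshold]

# Per-node (forwarded, received) counters collected by the IDS agent.
stats = {"n1": (98, 100), "n2": (95, 100), "sink": (3, 100)}
alerts = detect_sinkholes(stats)        # ["sink"]
```

Once a node is flagged, the IDS agent would alert the cluster to suspend transmissions through it, as described in the abstract.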

  20. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
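
The Kalman-filter update at the heart of the tracking step can be illustrated with a scalar sketch; the actual filter fuses multi-dimensional position and attitude states, and the noise variances q and r below are illustrative:

```python
def kalman_update(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter: fold a
    noisy runway-edge measurement z into the estimate x with
    variance p. q and r are process/measurement noise variances."""
    p = p + q                   # predict: uncertainty grows over time
    k = p / (p + r)             # Kalman gain
    x = x + k * (z - x)         # correct with the measurement innovation
    p = (1.0 - k) * p           # uncertainty shrinks after the update
    return x, p

# Noisy video-based measurements of a runway edge position (true value ~1.0).
x, p = 0.0, 1.0                 # vague prior
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_update(x, p, z)
# x has converged near 1.0 and the variance p has shrunk
```

The same gain-weighted blending is what lets the full filter smooth the video estimates with position and attitude data under turbulence.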

  1. A fast algorithm for estimating transmission probabilities in QTL detection designs with dense maps

    Directory of Open Access Journals (Sweden)

    Gilbert Hélène

    2009-11-01

    Full Text Available Abstract Background In the case of an autosomal locus, four transmission events from the parents to progeny are possible, specified by the grand parental origin of the alleles inherited by this individual. Computing the probabilities of these transmission events is essential to perform QTL detection methods. Results A fast algorithm for the estimation of these probabilities conditional to parental phases has been developed. It is adapted to classical QTL detection designs applied to outbred populations, in particular to designs composed of half and/or full sib families. It assumes the absence of interference. Conclusion The theory is fully developed and an example is given.

  2. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr

  3. An algorithm for detecting EMG onset/offset in trunk muscles during a reaction- stabilization test.

    Science.gov (United States)

    Jubany, Júlia; Angulo-Barroso, Rosa

    2016-04-27

    Most of the EMG analysis algorithms developed to date do not detect the whole sequence of rhythmic and subtle changes that take place during the process of trunk stabilization. Indeed, the few recent methods that are capable of assessing these important EMG characteristics are highly complex and not accessible in most applied clinical contexts. Our aim was to validate and disseminate a software program suitable for detecting multiple and relatively small EMG bursts during a trunk stabilization response. Ninety EMG recordings randomly selected from 50 individuals (24 with chronic low back pain) were analysed by our algorithm, based on means and standard deviations, and by an experienced examiner (as a gold standard). Concordance, sensitivity, specificity, positive predictive value, and negative predictive value were used to analyse reliability. Results showed a high degree of concordance between the two methods (87.2%), high sensitivity and specificity rates (79.5% and 89.2%), a moderate-to-low positive predictive value (66.9%), and a high negative predictive value (94.4%). The program provided is flexible and useful for detecting EMG activity. The selected parameters of the program were able to detect onset/offset EMG bursts and were valid for the purpose of this study, with a small tendency to over-detect bursts.
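
A mean-plus-SD burst detector of the kind described can be sketched as follows; the baseline length, the factor k, and the minimum burst duration are illustrative parameters, not the validated settings of the program:

```python
import numpy as np

def detect_bursts(emg, baseline_len=50, k=3.0, min_len=5):
    """Threshold the rectified signal at baseline mean + k*SD and
    keep supra-threshold runs of at least min_len samples, returned
    as (onset, offset) index pairs."""
    x = np.abs(np.asarray(emg, dtype=float))
    base = x[:baseline_len]                 # quiet pre-activation window
    thr = base.mean() + k * base.std()
    active = x > thr
    bursts, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                       # burst onset
        elif not a and start is not None:
            if i - start >= min_len:        # ignore spurious blips
                bursts.append((start, i - 1))
            start = None
    if start is not None and len(active) - start >= min_len:
        bursts.append((start, len(active) - 1))
    return bursts

# Quiet baseline, then two bursts separated by a quiet interval.
emg = [0.01] * 50 + [1.0] * 20 + [0.01] * 30 + [1.0] * 20
bursts = detect_bursts(emg)                 # [(50, 69), (100, 119)]
```

Returning every onset/offset pair, rather than a single onset, is what lets this style of detector capture the repeated small bursts of a stabilization response.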

  4. An analysis of oil spill detection algorithms using laser fluorosensor data

    Energy Technology Data Exchange (ETDEWEB)

    Jha, M.N.; Gao, Y. [Calgary Univ., AB (Canada). Schulich School of Engineering, Dept. of Geomatics Engineering

    2008-07-01

    This paper analyzed various algorithms used for oil spill detection and classification. The algorithms were compared on the basis of their reliability in detecting and classifying oil using Scanning Laser Environmental Airborne Fluorosensor (SLEAF) data as well as simulated data prepared by adding Gaussian noise to the reference data. A reliable scheme was subsequently developed to accurately detect and classify oil based on this analysis. Laser fluorosensors were shown to be the best available sensor for oil spill detection and classification because they can positively distinguish oil from the background, including water, ice and soil. The information provided by sensors for oil spill contingency planning should include the location and spread of an oil spill over a large area; the thickness distribution of an oil spill to estimate the quantity of spilled oil; classification of the oil type into heavy, medium and light crude oil in order to estimate environmental damage and to take appropriate response actions; and, information to help in clean-up operations. The proposed scheme successfully detected diesel oil slicks in the area around Vancouver Island. Results were in good agreement with Environment Canada findings. The authors recommended that further research is needed to improve and verify the reliability of the proposed scheme. 21 refs., 1 tab., 13 figs.

  5. Vibration-Based Damage Detection in Beams by Cooperative Coevolutionary Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Kittipong Boonlong

    2014-03-01

    Full Text Available Vibration-based damage detection, a nondestructive method, is based on the fact that vibration characteristics such as natural frequencies and mode shapes of structures change when damage occurs. This paper presents the cooperative coevolutionary genetic algorithm (CCGA), which is capable of handling an optimization problem with a large number of decision variables, as the optimizer for vibration-based damage detection in beams. In the CCGA, the minimized objective function is a numerical indicator of differences between the vibration characteristics of the actual damage and those of the anticipated damage. Damage detection in a uniform cross-section cantilever beam, a uniform-strength cantilever beam, and a uniform cross-section simply supported beam is used as the set of test problems. Random noise in the vibration characteristics is also considered in the damage detection. In the simulation analysis, the CCGA provides solutions superior to those of the standard genetic algorithms presented in previous works, although it generates fewer candidate solutions during the search. The simulation results reveal that the CCGA can efficiently identify the damage that has occurred in beams for all test problems, including damage detection in a beam with a large number of divided elements, such as 300 elements.

  6. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tao Ma

    2016-10-01

    Full Text Available The development of intrusion detection systems (IDS that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC and deep neural network (DNN algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN, support vector machine (SVM, random forest (RF and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.

  7. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.

  8. A Comparative Study of Data Mining Algorithms for High Detection Rate in Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Nabeela Ashraf

    2018-01-01

    Full Text Available Due to the fast growth and widespread use of the internet over the last decades, network security problems are increasing vigorously. Humans cannot handle the speed of processes and the huge amount of data required to handle network anomalies, so substantial automation is needed in both speed and accuracy. An Intrusion Detection System is one of the approaches to recognize illegal access and rare attacks in secure networks. In this paper, Naive Bayes, J48, and Random Forest classifiers are compared to compute the detection rate and accuracy of the IDS. For the experiments, the KDD_NSL dataset is used.

  9. Design of Cyber Attack Precursor Symptom Detection Algorithm through System Base Behavior Analysis and Memory Monitoring

    Science.gov (United States)

    Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo

    More recently, botnet-based cyber attacks, including spam mail and DDoS attacks, have sharply increased, which poses a fatal threat to Internet services. At present, antivirus businesses give top priority to detecting malicious code in the shortest time possible (Lv.2), based on the graph showing the relation between the spread of malicious code and time, which allows detection only after the malicious code occurs. Despite early detection, however, it is not possible to prevent the malicious code from occurring in the first place. Thus, we have developed an algorithm that can detect precursor symptoms at Lv.1 to prevent a cyber attack that uses the evasion method of an 'executing environment aware attack', by analyzing system behaviors and monitoring memory.

  10. An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network

    Directory of Open Access Journals (Sweden)

    Zhixiao Wang

    2014-01-01

    Full Text Available Topology potential theory is a new community detection theory on complex network, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node difference and assume that all nodes have the same mass. This hypothesis leads to inaccuracy of topology potential calculation and then decreases the precision of community detection. Inspired by the idea of PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node’s mass obtained by our method can effectively reflect its importance and influence in complex network. The more important the node is, the bigger its mass is. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise.

  11. Mobile Phone Based Falling Detection Sensor and Computer-Aided Algorithm for Elderly People

    Directory of Open Access Journals (Sweden)

    Lee Jong-Ha

    2016-01-01

    Full Text Available Falls are dangerous for the elderly population; therefore many fall detection systems have been developed. However, previous methods are either bulky for elderly people or use only a single sensor to isolate falls from activities of daily living, which makes falls difficult to distinguish. In this paper, we present a cost-effective and easy-to-use portable fall detection sensor and algorithm. Specifically, to detect human falls, we used the three-axis accelerometer and three-axis gyroscope in a mobile phone. We used a Fourier descriptor-based frequency analysis method to classify both normal and falling status. From the experimental results, the proposed method detects falling status with 96.14% accuracy.
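
The paper classifies falls via Fourier descriptors; a simpler dip-then-impact heuristic on the acceleration magnitude conveys the underlying signal shape (the thresholds in g and the gap length are illustrative assumptions, not the paper's method):

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, max_gap=20):
    """A fall candidate is a near-free-fall dip (|a| < free_fall_g)
    followed within max_gap samples by an impact spike
    (|a| > impact_g) in the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:             # body briefly in free fall
            for j in range(i + 1, min(i + 1 + max_gap, len(mags))):
                if mags[j] > impact_g:  # followed by ground impact
                    return True
    return False

# Steady walking stays near 1 g; a fall dips toward 0 g then spikes.
walking = [(0.0, 0.0, 1.0)] * 30
fall = ([(0.0, 0.0, 1.0)] * 10 + [(0.0, 0.0, 0.1)] * 3
        + [(0.0, 0.0, 3.0)] * 2 + [(0.0, 0.0, 1.0)] * 10)
```

Frequency-domain features such as the Fourier descriptors used in the paper make this separation far more robust against vigorous daily activities than fixed thresholds.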

  12. Behavioral features recognition and oestrus detection based on fast approximate clustering algorithm in dairy cows

    Science.gov (United States)

    Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua

    2017-06-01

Recognition of behavioural features is important for detecting oestrus and sickness in dairy herds, and there is a need for heat-detection aids. The detection method in this paper is based on measuring the individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. The data on behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate K-means algorithm (FAKM) was proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.
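The clustering step can be illustrated with plain k-means on one-dimensional activity counts; the paper's FAKM is a fast *approximate* variant of this loop whose specific accelerations are not detailed in this record, so the sketch below shows only the baseline idea.

```python
# Plain 1-D k-means as a baseline for grouping activity measurements.
# The paper's FAKM approximates and accelerates this loop; the data and
# initial centres below are illustrative assumptions.

def kmeans_1d(values, centers, iters=20):
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # Recompute each centre as the mean of its cluster (keep old centre
        # if a cluster is empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Activity counts: a low-activity group and a high-activity (oestrus-like) group.
activity = [10, 12, 11, 9, 50, 52, 49, 51]
c_low, c_high = sorted(kmeans_1d(activity, [0.0, 100.0]))
```

Cows whose activity falls in the high cluster would then be flagged for oestrus checking.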

  13. A New MANET wormhole detection algorithm based on traversal time and hop count analysis.

    Science.gov (United States)

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability.
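The core TTHCA intuition, that a wormhole tunnels packets over a link far longer than any honest radio hop, can be sketched as a per-hop distance check. The radio range, per-hop processing delay, and threshold logic below are illustrative assumptions, not the paper's calibrated parameters.

```python
# Sketch of the traversal-time / hop-count idea: a wormhole makes the
# measured per-hop propagation time implausibly large for the advertised
# hop count. All constants here are assumed, illustrative values.

SPEED_OF_LIGHT = 3.0e8    # m/s
MAX_RADIO_RANGE = 250.0   # metres; typical sensor radio range (assumed)
PROCESSING_DELAY = 1e-4   # seconds of per-hop relaying overhead (assumed)

def wormhole_suspected(route_rtt_s, hop_count):
    """Flag a route whose implied per-hop distance exceeds the radio range."""
    propagation = route_rtt_s - 2 * hop_count * PROCESSING_DELAY
    per_hop_distance = propagation * SPEED_OF_LIGHT / (2 * hop_count)
    return per_hop_distance > MAX_RADIO_RANGE

# Honest 4-hop route, 200 m per hop (round trip covers each hop twice).
normal_rtt = 2 * 4 * PROCESSING_DELAY + (2 * 4 * 200.0) / SPEED_OF_LIGHT
# Wormhole advertising 2 hops while tunnelling 10 km each way.
worm_rtt = 2 * 2 * PROCESSING_DELAY + (2 * 10000.0) / SPEED_OF_LIGHT
```

In practice the processing delay dominates the propagation time, which is why accurate per-hop time accounting is the hard part of this class of detector.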

  14. A New MANET Wormhole Detection Algorithm Based on Traversal Time and Hop Count Analysis

    Directory of Open Access Journals (Sweden)

    Göran Pulkkis

    2011-11-01

Full Text Available As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability.

  15. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    Science.gov (United States)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
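The white top-hat transform (the signal minus its morphological opening) is what lets narrow bright spots like microcalcifications stand out against a slowly varying background. A one-dimensional sketch keeps the idea visible; the window size and profile below are illustrative assumptions, and a real implementation would work on 2-D image patches.

```python
# White top-hat on a 1-D profile: opening = dilation(erosion(signal)),
# top-hat = signal - opening. Narrow peaks survive; the slow background
# is removed. The window size k is an assumed value.

def erode(signal, k):
    r = k // 2
    n = len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(signal, k):
    r = k // 2
    n = len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def white_tophat(signal, k=5):
    opened = dilate(erode(signal, k), k)   # morphological opening
    return [s - o for s, o in zip(signal, opened)]

# Smooth ramp background with one narrow 10-unit spike at index 5.
profile = [i * 0.5 for i in range(12)]
profile[5] += 10.0
peaks = white_tophat(profile)
```

Thresholding `peaks` then yields candidate microcalcification locations for the subsequent sub-segmentation and classification stages.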

  16. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    Science.gov (United States)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available, and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
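A one-dimensional cell-averaging CFAR shows the basic mechanism the paper refines: a cell is declared a target when it exceeds the local clutter estimate by a scale factor. The iterative censoring and G0 model fitting of the paper are omitted here, and the guard/training/scale values are illustrative assumptions.

```python
# 1-D cell-averaging CFAR sketch. A cell is a detection when its power
# exceeds scale * (mean of training cells), with guard cells excluded
# around the cell under test. Parameters are assumed, illustrative values.

def ca_cfar(power, guard=1, train=4, scale=4.0):
    hits = []
    n = len(power)
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i + guard + train + 1):
            if 0 <= j < n and abs(j - i) > guard:
                cells.append(power[j])
        if cells and power[i] > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits

clutter = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1, 1.0, 1.2]
scene = clutter[:]
scene[5] = 20.0   # bright ship-like return
detections = ca_cfar(scene)
```

The paper's censoring step matters because bright targets inside the training window inflate the clutter estimate; iteratively removing detected targets and re-estimating fixes exactly that bias.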

  17. A Distributed Algorithm for the Cluster-Based Outlier Detection Using Unsupervised Extreme Learning Machines

    Directory of Open Access Journals (Sweden)

    Xite Wang

    2017-01-01

Full Text Available Outlier detection is an important data mining task, whose target is to find the abnormal or atypical objects in a given dataset. Techniques for detecting outliers have many applications, such as credit card fraud detection and environment monitoring. Our previous work proposed the Cluster-Based (CB) outlier and gave a centralized method using unsupervised extreme learning machines to compute CB outliers. In this paper, we propose a new distributed algorithm for CB outlier detection (DACB). On the master node, we collect a small number of points from the slave nodes to obtain a threshold. On each slave node, we design a new filtering method that uses the threshold to efficiently speed up the computation. Furthermore, we also propose a ranking method to optimize the order of cluster scanning. Finally, the effectiveness and efficiency of the proposed approaches are verified through extensive simulation experiments.

  18. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    Science.gov (United States)

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
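The centroid algorithm mentioned as a baseline is simple enough to sketch directly: the peak wavelength is taken as the intensity-weighted centre of mass of the spectrum. The spectrum values below are illustrative, not benchmark data from the paper.

```python
# Centroid (centre-of-mass) peak locator for a reflection spectrum, one of
# the baseline algorithms benchmarked in the paper. Data are illustrative.

def centroid_peak(wavelengths, intensities, floor=0.0):
    """Wavelength of the intensity-weighted centre of mass above `floor`."""
    num = den = 0.0
    for wl, inten in zip(wavelengths, intensities):
        w = max(inten - floor, 0.0)
        num += w * wl
        den += w
    return num / den

wl = [1549.8, 1549.9, 1550.0, 1550.1, 1550.2]
spectrum = [0.1, 0.6, 1.0, 0.6, 0.1]    # symmetric peak centred at 1550.0 nm
peak = centroid_peak(wl, spectrum)
```

The systematic error the paper reports arises when the real spectral shape is asymmetric, which biases this centre-of-mass estimate away from the true Bragg wavelength.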

  19. Leads Detection Using Mixture Statistical Distribution Based CRF Algorithm from Sentinel-1 Dual Polarization SAR Imagery

    Science.gov (United States)

    Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting

    2017-04-01

Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing since it can provide continuous observations in all weather, day and night. SAR can be used for extracting surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Fields (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve leads detection in Sentinel-1A dual polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For mixture distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited to estimate the parameters of each single distribution. An iterative Expectation Maximization (EM) algorithm is investigated to calculate the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect ratio constraint and spatial smoothing, are utilized to improve the visual result.
The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a
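The mixture-fitting step can be illustrated with plain EM on a two-component Gaussian mixture. This is a deliberate simplification: the paper initialises with MoLC and uses non-Gaussian SAR amplitude families, so the Gaussian model, starting values, and sample data below are all illustrative assumptions.

```python
import math

# 1-D two-component Gaussian-mixture EM sketch, standing in for the mixture
# model fitted to SAR backscatter. Model family, initial values, and data
# are assumptions; the paper uses MoLC-initialised non-Gaussian mixtures.

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, mu=(0.0, 5.0), var=(1.0, 1.0), pi=0.5, iters=50):
    mu, var = list(mu), list(var)
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = pi * gauss_pdf(x, mu[1], var[1])
            p0 = (1 - pi) * gauss_pdf(x, mu[0], var[0])
            resp.append(p1 / (p0 + p1))
        # M-step: re-estimate weight, means and variances.
        n1 = sum(resp)
        n0 = len(data) - n1
        pi = n1 / len(data)
        mu[1] = sum(r * x for r, x in zip(resp, data)) / n1
        mu[0] = sum((1 - r) * x for r, x in zip(resp, data)) / n0
        var[1] = sum(r * (x - mu[1]) ** 2 for r, x in zip(resp, data)) / n1 + 1e-9
        var[0] = sum((1 - r) * (x - mu[0]) ** 2 for r, x in zip(resp, data)) / n0 + 1e-9
    return mu, var, pi

# Dark "lead" pixels near 1.0 and brighter ice pixels near 6.0.
samples = [0.8, 1.0, 1.2, 0.9, 1.1, 5.8, 6.0, 6.2, 5.9, 6.1]
means, variances, weight = em_two_gaussians(samples)
```

The fitted component densities would then supply the posterior probabilities that feed the CRF potentials.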

  20. Breadth-first search-based single-phase algorithms for bridge detection in wireless sensor networks.

    Science.gov (United States)

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-07-10

Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, redesigned to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate their practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5 times.
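What a "bridge" means computationally can be shown with the classic centralized DFS low-link method; the paper's contribution is doing this distributively, in a single phase, on top of BFS, so the sketch below is only the textbook baseline, not the paper's algorithm.

```python
# Classic centralized bridge detection via DFS low-link values (Tarjan).
# An edge (u, v) is a bridge when the DFS subtree rooted at v cannot reach
# u or any of u's ancestors through a back edge.

def find_bridges(adj):
    """adj: dict {node: set(neighbours)} of an undirected graph."""
    disc, low, bridges = {}, {}, []
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])    # back edge
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:             # no back route: bridge
                    bridges.append(tuple(sorted((u, v))))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return sorted(bridges)

# Two triangles joined by a single edge (2, 3): that edge is the only bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
bridges = find_bridges(adj)
```

A distributed WSN variant must reach the same verdict using only local message exchanges, which is why piggybacking on an existing BFS construction saves energy.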

  1. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    Science.gov (United States)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
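The Virtual Sensors idea, an ensemble predicting a sensor value with the spread of its predictions as the uncertainty estimate, can be sketched with a bootstrap ensemble of linear fits. The learners, data, and bootstrap scheme below are illustrative stand-ins; the record does not specify the ensemble members NASA used.

```python
import math
import random

# Sketch of the Virtual Sensors idea: an ensemble of regressors predicts a
# sensor value from correlated inputs, and the standard deviation of their
# predictions serves as the uncertainty estimate. Simple bootstrapped linear
# fits stand in for the paper's (unspecified) learners.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def ensemble_predict(models, x):
    preds = [a * x + b for a, b in models]
    mean = sum(preds) / len(preds)
    std = math.sqrt(sum((p - mean) ** 2 for p in preds) / len(preds))
    return mean, std

random.seed(0)
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]   # noisy "pressure"
# Bootstrap an ensemble: each model sees a resampled version of the data.
models = []
for _ in range(10):
    idx = [random.randrange(len(xs)) for _ in xs]
    models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))
mean, std = ensemble_predict(models, 10.0)
```

A fault flag would then compare the measured sensor value against `mean` with a tolerance informed by `std`.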

  2. A novel seizure detection algorithm informed by hidden Markov model event states

    Science.gov (United States)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce the false positive rate relative to current industry standards.

  3. Motion Mode Recognition and Step Detection Algorithms for Mobile Phone Users

    Science.gov (United States)

    Susi, Melania; Renaudin, Valérie; Lachapelle, Gérard

    2013-01-01

Microelectromechanical Systems (MEMS) technology is playing a key role in the design of the new generation of smartphones. Thanks to their reduced size and power consumption, MEMS sensors can be embedded in such mobile devices to increase their functionality. However, MEMS sensors alone cannot provide accurate autonomous location without external updates, e.g., from GPS signals, since their signals are degraded by various errors. When these sensors are fixed on the user's foot, the stance phases of the foot can easily be determined and periodic Zero velocity UPdaTes (ZUPTs) are performed to bound the position error. When the sensor is in the hand, the situation becomes much more complex. First of all, the hand motion can be decoupled from the general motion of the user. Second, the characteristics of the inertial signals can differ depending on the carrying modes. Therefore, algorithms for characterizing the gait cycle of a pedestrian using a handheld device have been developed. A classifier able to detect motion modes typical for mobile phone users has been designed and implemented. According to the detected motion mode, adaptive step detection algorithms are applied. Success of the step detection process is found to be higher than 97% in all motion modes. PMID:23348038

  4. Motion Mode Recognition and Step Detection Algorithms for Mobile Phone Users

    Directory of Open Access Journals (Sweden)

    Gérard Lachapelle

    2013-01-01

Full Text Available Microelectromechanical Systems (MEMS) technology is playing a key role in the design of the new generation of smartphones. Thanks to their reduced size and power consumption, MEMS sensors can be embedded in such mobile devices to increase their functionality. However, MEMS sensors alone cannot provide accurate autonomous location without external updates, e.g., from GPS signals, since their signals are degraded by various errors. When these sensors are fixed on the user’s foot, the stance phases of the foot can easily be determined and periodic Zero velocity UPdaTes (ZUPTs) are performed to bound the position error. When the sensor is in the hand, the situation becomes much more complex. First of all, the hand motion can be decoupled from the general motion of the user. Second, the characteristics of the inertial signals can differ depending on the carrying modes. Therefore, algorithms for characterizing the gait cycle of a pedestrian using a handheld device have been developed. A classifier able to detect motion modes typical for mobile phone users has been designed and implemented. According to the detected motion mode, adaptive step detection algorithms are applied. Success of the step detection process is found to be higher than 97% in all motion modes.

  5. Motion mode recognition and step detection algorithms for mobile phone users.

    Science.gov (United States)

    Susi, Melania; Renaudin, Valérie; Lachapelle, Gérard

    2013-01-24

Microelectromechanical Systems (MEMS) technology is playing a key role in the design of the new generation of smartphones. Thanks to their reduced size and power consumption, MEMS sensors can be embedded in such mobile devices to increase their functionality. However, MEMS sensors alone cannot provide accurate autonomous location without external updates, e.g., from GPS signals, since their signals are degraded by various errors. When these sensors are fixed on the user's foot, the stance phases of the foot can easily be determined and periodic Zero velocity UPdaTes (ZUPTs) are performed to bound the position error. When the sensor is in the hand, the situation becomes much more complex. First of all, the hand motion can be decoupled from the general motion of the user. Second, the characteristics of the inertial signals can differ depending on the carrying modes. Therefore, algorithms for characterizing the gait cycle of a pedestrian using a handheld device have been developed. A classifier able to detect motion modes typical for mobile phone users has been designed and implemented. According to the detected motion mode, adaptive step detection algorithms are applied. Success of the step detection process is found to be higher than 97% in all motion modes.
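A minimal step detector, counting local maxima of the acceleration magnitude above a threshold, illustrates what the adaptive stage tunes per motion mode. The threshold and the synthetic stride signal below are illustrative assumptions; the papers above adapt both the method and its parameters to the detected carrying mode.

```python
# Minimal step counter: count local maxima of the acceleration magnitude
# (in g) that exceed a threshold. The 1.2 g threshold and the synthetic
# stride shape are assumed, illustrative values.

def count_steps(magnitudes, threshold=1.2):
    steps = 0
    for i in range(1, len(magnitudes) - 1):
        prev_m, m, next_m = magnitudes[i - 1:i + 2]
        if m > threshold and m > prev_m and m >= next_m:
            steps += 1
    return steps

# Synthetic walk: four heel-strike peaks riding on gravity (~1 g).
signal = []
for _ in range(4):
    signal += [1.0, 1.1, 1.6, 1.1, 1.0]   # one stride with an impact peak
steps = count_steps(signal)
```

In a handheld scenario, a fixed threshold like this fails across carrying modes, which is exactly why the classifier-then-adapt design above is needed.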

  6. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

Objective. Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any subsequent preprocessing step, like spike sorting or burst detection, in order to reduce the number of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms have poor performance. Approach. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. Main results. The performance of the algorithms is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to one specific set of data, we also verify their performance using a simulated, publicly available data set. We show that both proposed algorithms have the best performance of all tested methods, regardless of the signal-to-noise ratio, in both data sets. Significance. This contribution will benefit electrophysiological investigations of human cells. In particular, the spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
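An energy-operator detector can be illustrated with the Teager energy operator, a common spike-emphasis baseline. Note this is a stand-in: the paper's first method applies an energy operator in the stationary wavelet domain, and its second uses a time-frequency representation; the trace and threshold below are illustrative.

```python
# Spike emphasis with the (nonlinear) Teager energy operator, a common
# baseline related in spirit to the paper's stationary-wavelet energy
# operator. Trace values and threshold are illustrative assumptions.

def teager_energy(x):
    """psi[n] = x[n]^2 - x[n-1]*x[n+1]; large for sharp, high-frequency events."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_spikes(x, threshold):
    psi = teager_energy(x)
    # +1 re-aligns psi indices with the original samples.
    return [n + 1 for n, e in enumerate(psi) if e > threshold]

# Low-amplitude noise-like trace with one sharp spike at index 6.
trace = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 2.0, -0.1, 0.1, -0.05]
spikes = detect_spikes(trace, threshold=1.0)
```

Because the operator responds to both amplitude and local frequency, it separates sharp spikes from slow fluctuations better than a plain amplitude threshold at the same noise level.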

  7. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    Science.gov (United States)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals, such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.

  8. Agent-based algorithm for fault detection and recovery of gyroscope's drift in small satellite missions

    Science.gov (United States)

    Carvajal-Godinez, Johan; Guo, Jian; Gill, Eberhard

    2017-10-01

Failure detection, isolation, and recovery is an essential requirement of any space mission design. Several spacecraft components, especially sensors, are prone to performance deviation due to intrinsic physical effects. For that reason, innovative approaches to the treatment of faults in onboard sensors are necessary. This work introduces the concept of agent-based fault detection and recovery for sensors used in satellite attitude determination and control. It focuses on the implementation of an algorithm for addressing linear drift bias in gyroscopes. The algorithm was implemented using an agent-based architecture that can be integrated into the satellite's onboard software. Numerical simulations were carried out to show the effectiveness of this scheme in satellite operations. The proposed algorithm showed a reduction of up to 50% in the stabilization time for the detumbling maneuver, and also an improvement in the pointing accuracy of up to 20% when applied in precise payload pointing procedures. The relevance of this contribution is its added value for optimizing the launch and early operation of small satellite missions, as well as being an enabler for innovative satellite functions, for instance, optical downlink communication.

  9. Damage detection and localization algorithm using a dense sensor network of thin film sensors

    Science.gov (United States)

    Downey, Austin; Ubertini, Filippo; Laflamme, Simon

    2017-04-01

    The authors have recently proposed a hybrid dense sensor network consisting of a novel, capacitive-based thin-film electronic sensor for monitoring strain on mesosurfaces and fiber Bragg grating sensors for enforcing boundary conditions on the perimeter of the monitored area. The thin-film sensor monitors local strain over a global area through transducing a change in strain into a change in capacitance. In the case of bidirectional in-plane strain, the sensor output contains the additive measurement of both principal strain components. When combined with the mature technology of fiber Bragg grating sensors, the hybrid dense sensor network shows potential for the monitoring of mesoscale systems. In this paper, we present an algorithm for the detection, quantification, and localization of strain within a hybrid dense sensor network. The algorithm leverages the advantages of a hybrid dense sensor network for the monitoring of large scale systems. The thin film sensor is used to monitor strain over a large area while the fiber Bragg grating sensors are used to enforce the uni-directional strain along the perimeter of the hybrid dense sensor network. Orthogonal strain maps are reconstructed by assuming different bidirectional shape functions and are solved using the least squares estimator to reconstruct the planar strain maps within the hybrid dense sensor network. Error between the estimated strain maps and measured strains is extracted to derive damage detecting features, dependent on the selected shape functions. Results from numerical simulations show good performance of the proposed algorithm.
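The least-squares reconstruction step can be sketched in one dimension: assume a shape function, solve the normal equations against noisy readings, and compare fit to measurement. The quadratic shape function, sensor positions, and noise below are illustrative assumptions; the paper fits bidirectional 2-D shape functions over the sensor network.

```python
# Least-squares fit of an assumed shape function to noisy strain readings,
# the core of the strain-map reconstruction. A 1-D quadratic stands in for
# the paper's bidirectional 2-D shape functions; data are illustrative.

def fit_quadratic(xs, ys):
    """Solve the 3x3 normal equations for y ~ c0 + c1*x + c2*x^2."""
    # Build A^T A and A^T y for the design matrix A = [1, x, x^2].
    s = [sum(x ** k for x in xs) for k in range(5)]
    ata = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    aty = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gauss-Jordan elimination with partial pivoting.
    m = [row[:] + [b] for row, b in zip(ata, aty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def true_strain(x):
    return 5.0 - 2.0 * x + 0.5 * x * x   # assumed "true" strain field

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
noise = [0.01, -0.02, 0.0, 0.01, -0.01, 0.02, 0.0]
ys = [true_strain(x) + e for x, e in zip(xs, noise)]
c0, c1, c2 = fit_quadratic(xs, ys)
```

The residual between the fitted map and the measured strains is exactly the kind of damage-sensitive feature the paper extracts: local damage makes the assumed shape function fit poorly near the damaged region.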

  10. Enhanced specificity of a dual chamber ICD arrhythmia detection algorithm by rate stability criteria.

    Science.gov (United States)

    Mletzko, Ralph; Anselme, Frederic; Klug, Didier; Schoels, Wolfgang; Bowes, Robert; Iscolo, Nicolas; Nitzsché, Rémi; Sadoul, Nicolas

    2004-08-01

Inappropriate therapy remains an important limitation of implantable cardioverter defibrillators (ICD). PARAD+ was developed to increase the specificity conferred by the original PARAD detection algorithm in the detection of atrial fibrillation (AF). To compare the performances of the two algorithms, we retrospectively analyzed all spontaneous and sustained episodes of AF and ventricular tachycardia (VT) documented by state-of-the-art ICDs programmed with PARAD or PARAD+ at the physicians' discretion. The results were stratified according to tachycardia rates <150 or ≥150 beats/min. The study included 329 men and 48 women (64 +/- 10 years of age). PARAD was programmed in 263 devices, and PARAD+ in 84. During a mean follow-up of 11 +/- 3 months, 1,019 VT and 315 AF episodes were documented among 338 devices. For tachycardias with ventricular rates <150 beats/min, the sensitivity of PARAD versus PARAD+ was 96% versus 99% (NS), specificity 80% versus 93% (P < 0.002), positive predictive value (PPV) 94% versus 91% (NS), and negative predictive value (NPV) 86% versus 99% (P < 0.0001). In contrast, in the fast VT zone, the specificity and PPV of PARAD (95% versus 84% and 100% versus 96%) were higher than those of PARAD+ (NS, P < 0.001). Among 23 AF episodes treated in 16 patients, 3 triggered an inappropriate shock in 3 patients, all in the PARAD population. PARAD+ significantly increased the ICD algorithm's diagnostic specificity and NPV for AF in the slow VT zone without compromising patient safety.

  11. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    Science.gov (United States)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

This paper attempts to tackle one of the challenges faced in soft input soft output Multiple Input Multiple Output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system since the receiver chooses whether to accept or to reject the transmission, according to the success rate of the detection thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique, Dynamic Voltage and Frequency Scaling, helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware codesign simulation.

  12. Comprehensive bearing condition monitoring algorithm for incipient fault detection using acoustic emission

    Directory of Open Access Journals (Sweden)

    Amit R. Bhende

    2014-09-01

Full Text Available Bearing reliability plays a major role in obtaining the desired performance of any machine. Continuous condition monitoring of a machine is required in applications where machine failure leads to loss of production, reduced human safety or loss of precision. Machine faults are often linked to bearing faults. Condition monitoring of a machine involves a continuous watch on the performance of its bearings and prediction of bearing faults before they cause a breakdown. This paper presents an experimental study to diagnose faults while the bearing is in operation. An acoustic emission technique is used in the experimentation. An algorithm is developed to process the various types of signals generated by different bearing defects. The algorithm combines time-domain analysis with low-frequency analysis techniques such as the fast Fourier transform and high-frequency envelope detection. Two methods have been adopted for envelope detection: the Hilbert transform and order analysis. The experimental study is carried out for a deep groove ball bearing cage defect. Results show the potential effectiveness of the proposed algorithm in determining the presence, exact location and severity of a fault.
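The envelope-detection idea can be sketched with a rectify-and-smooth demodulator: bearing defect impacts amplitude-modulate a high-frequency structural resonance, and the envelope recovers the low-frequency impact rate. This is a simpler stand-in for the Hilbert-transform and order-analysis methods the paper actually adopts; the signal frequencies and window length below are illustrative assumptions.

```python
import math

# Rectify-and-smooth envelope sketch: a stand-in for Hilbert-transform
# envelope detection. Carrier/modulation frequencies and the smoothing
# window are assumed, illustrative values.

def envelope(signal, window=9):
    rect = [abs(s) for s in signal]        # full-wave rectification
    half = window // 2
    out = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))    # moving-average smoothing
    return out

# A 100 Hz impact train modulating a 2 kHz resonance, sampled at 8 kHz.
fs, fc, fm = 8000.0, 2000.0, 100.0
n = 400
signal = [(1.0 + 0.8 * math.sin(2 * math.pi * fm * i / fs)) *
          math.sin(2 * math.pi * fc * i / fs) for i in range(n)]
env = envelope(signal)
```

A subsequent FFT of `env` would reveal the 100 Hz defect repetition frequency, which is the quantity matched against the bearing's characteristic fault frequencies.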

  13. Comparison between genetic algorithm and self organizing map to detect botnet network traffic

    Science.gov (United States)

    Yugandhara Prabhakar, Shinde; Parganiha, Pratishtha; Madhu Viswanatham, V.; Nirmala, M.

    2017-11-01

In the cyber security world, botnet attacks are increasing, and detecting botnets is a challenging task. A botnet is a group of computers connected in a coordinated fashion to perform malicious activities. Many techniques have been developed and used to detect and prevent botnet traffic and attacks. In this paper, a comparative study is carried out on the Genetic Algorithm (GA) and the Self Organizing Map (SOM) for detecting botnet network traffic. Both are soft computing techniques and are used in this paper as data analytics systems. GA is based on the natural evolution process, while SOM is a type of artificial neural network that uses unsupervised learning; SOM classifies the data according to its neurons. A sample of the KDD99 dataset is used as input to both GA and SOM.

  14. An algorithm for the detection of move repetition without the use of hash-keys

    Directory of Open Access Journals (Sweden)

    Vučković Vladan

    2007-01-01

    Full Text Available This paper addresses the theoretical and practical aspects of an important problem in computer chess programming: the problem of draw detection in cases of position repetition. The standard approach used in the majority of computer chess programs is hash-oriented. This method is sufficient in most cases, as the Zobrist keys are already present due to the systemic positional hashing, so they need not be computed anew for the purpose of draw detection. The new algorithm that we have developed solves the problem of draw detection in cases when Zobrist keys are not used in the program, i.e. when the memory is not hashed.
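
    To make the non-hashed approach concrete, here is a hypothetical sketch of repetition counting by direct comparison of stored position snapshots (no Zobrist keys); the snapshot encoding and the irreversible-move bookkeeping are illustrative, not the paper's implementation.

```python
def is_threefold(history, irreversible_ply):
    """Threefold-repetition test by direct comparison of stored position
    snapshots -- no Zobrist keys and no hashed memory required.

    history          -- one hashable full-state snapshot per ply (placement,
                        side to move, castling and en-passant rights)
    irreversible_ply -- index of the last capture or pawn move; earlier
                        positions can never repeat the current one
    """
    current = history[-1]
    count = 1
    # Step back two plies at a time so the same side is always to move.
    for i in range(len(history) - 3, irreversible_ply - 1, -2):
        if history[i] == current:
            count += 1
            if count >= 3:
                return True
    return False

# Toy demo with opaque snapshot labels: the sequence A B A B A repeats A three times.
print(is_threefold(["A", "B", "A", "B", "A"], 0))  # True
```

    Restricting the backward scan to plies after the last irreversible move keeps the comparison loop short in practice.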

  15. JACoW Model learning algorithms for anomaly detection in CERN control systems

    CERN Document Server

    Tilaro, Filippo; Gonzalez-Berges, Manuel; Roshchin, Mikhail; Varela, Fernando

    2018-01-01

    The CERN automation infrastructure consists of over 600 heterogeneous industrial control systems with around 45 million deployed sensors, actuators and control objects. The monitoring of such a huge system is therefore evidently a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in any of the CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge; the other two mine the historical generated data to create a simple model of the system, which is then used to detect faulty sensor measurements. The presented methods can be categorized as dynamic unsupervised anomaly detection; “dynamic” since the behaviour of the system and the evolution of its attributes are observed and change over time. They are “unsupervised” because we are trying to predict faulty events without examples in the data history. So, the described strategies involve monitoring t...

  16. Efficiencies of Inhomogeneity-Detection Algorithms: Comparison of Different Detection Methods and Efficiency Measures

    Directory of Open Access Journals (Sweden)

    Peter Domonkos

    2013-01-01

    Full Text Available Efficiency evaluations are presented for the change-point detection methods used in nine major objective homogenization methods (DOHMs). The evaluations are conducted using ten different simulated datasets and four efficiency measures: detection skill, skill of linear trend estimation, sum of squared error, and a combined efficiency measure. The test datasets applied have a diverse set of inhomogeneity (IH) characteristics and include one dataset similar to the monthly benchmark temperature dataset of the European benchmarking effort known by the acronym COST HOME. The performance of DOHMs is highly dependent on the characteristics of the test datasets and efficiency measures. Measures of skill differ markedly according to the frequency and mean duration of inhomogeneities and vary with the ratio of IH magnitudes to background noise. The study focuses on cases in which high-quality relative time series (i.e., the difference between a candidate and reference series) can be created but the frequency and intensity of inhomogeneities are high. Results show that in these cases the Caussinus-Mestre method is the most effective, although appreciably good results can also be achieved by several other DOHMs, such as the Multiple Analysis of Series for Homogenisation, the Bayes method, Multiple Linear Regression, and the Standard Normal Homogeneity Test.

  17. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    Science.gov (United States)

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.

  18. Topic extraction method using RED-NMF algorithm for detecting outbreak of some disease on Twitter

    Science.gov (United States)

    Iskandar, Afif Akbar

    2017-03-01

    Indonesia has one of the largest social media user bases, which can be useful for detecting popular trends, including disease outbreaks, through topic-extraction methods such as NMF. However, the posted texts are unstructured and need to be cleaned before being processed. One common cleaning method uses regular expressions; however, social media texts vary widely, so the regular expressions must be adapted to each dataset to be cleaned, and an algorithm is therefore needed to "learn" the form of the texts. In this paper, we propose a framework for cleaning Twitter data and extracting topics from it, called RED-NMF: a feature extraction and filtering method based on the regular expression discovery algorithm for data cleaning, and non-negative matrix factorization for extracting the topics.
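
    The topic-extraction half of such a pipeline can be sketched with a minimal multiplicative-update NMF on a toy term-document matrix; the terms, counts, and rank below are invented for illustration, and the regular-expression cleaning stage is omitted.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ≈ W @ H with W, H >= 0."""
    gen = np.random.default_rng(seed)
    W = gen.random((V.shape[0], k)) + 1e-3
    H = gen.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy term-document matrix (rows = terms, columns = tweets): two clean topics.
terms = ["flu", "fever", "rain", "flood"]
V = np.array([[4.0, 2.0, 0.0, 0.0],
              [2.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 2.0, 4.0]])
W, H = nmf(V, k=2)
top_terms = sorted(terms[int(np.argmax(W[:, j]))] for j in range(2))
print(top_terms)  # each factor latches onto one of the two topics
```

    The columns of W give term weights per topic; reading off the highest-weighted terms per column is the usual way topics are labelled.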

  19. Demyelinating and ischemic brain diseases: detection algorithm through regular magnetic resonance images

    Science.gov (United States)

    Castillo, D.; Samaniego, René; Jiménez, Y.; Cuenca, L.; Vivanco, O.; Rodríguez-Álvarez, M. J.

    2017-09-01

    This work presents progress toward the development of an algorithm for the automatic detection of demyelinating lesions and cerebral ischemia in magnetic resonance images, which are of paramount importance in the diagnosis of brain diseases. The image sequences used are T1, T2, and FLAIR. Brain demyelinating lesions occur due to damage of the myelin layer of nerve fibers, and this deterioration is the cause of serious pathologies such as multiple sclerosis (MS), leukodystrophy, and acute disseminated encephalomyelitis. Cerebral or cerebrovascular ischemia is the interruption of the blood supply to the brain, which cuts off the flow of oxygen and nutrients needed to maintain the functioning of brain cells. The algorithm allows the differentiation between these two types of lesions.

  20. Development of fog detection algorithm using Himawari-8/AHI data at daytime

    Science.gov (United States)

    Han, Ji-Hye; Kim, So-Hyeong; Suh, Myoung-Seok

    2017-04-01

    Fog is defined as small cloud water drops or ice particles floating in the air, reducing visibility to less than 1 km. In general, fog affects ecological systems, the radiation budget, and human activities such as air, sea, and road transport. In this study, we developed a fog detection algorithm (FDA) consisting of four threshold tests on optical and textural properties of fog, using satellite and ground observation data at daytime. For the detection of fog, we used satellite data (Himawari-8/AHI) and ancillary data such as air temperature from NWP data (over land) and SST from OSTIA (over sea); for validation, we used ground-observed visibility data from KMA. The optical and textural properties used are the normalized albedo (NAlb) and the normalized local standard deviation (NLSD), respectively. In addition, the differences between air temperature (or SST) and fog top temperature (FTa(S)) are applied to discriminate fog from low clouds, and post-processing is performed to detect the fog edge based on the spatial continuity of fog. Threshold values for each test are determined by optimization processes based on ROC analysis of selected fog cases. Fog detection is performed according to the solar zenith angle (SZA) because the available satellite data differ; in this study, we defined daytime as SZA less than 85˚. The result of the FDA is presented as a fog probability (0 ˜ 100 %) through the weighted sum of each test result. Validation against ground-observed visibility data showed that POD and FAR are 0.63 ˜ 0.89 and 0.29 ˜ 0.46, respectively, according to the fog intensity and type. In general, detection skill is better for intense fog without high clouds than for localized and weak fog. We plan to transfer this algorithm to the National Meteorological Satellite Center of KMA for the operational detection of fog using data from GK-2A/AMI, which will be launched in 2018.
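
    The weighted-sum scoring step can be sketched as follows; the weights and threshold values are placeholders for illustration, not the optimized values from the ROC analysis.

```python
def fog_probability(nalb, nlsd, d_temp,
                    weights=(40, 30, 30),
                    thresholds=(0.6, 0.05, 5.0)):
    """Combine three threshold tests into a fog probability in percent.

    nalb   -- normalized albedo (fog is bright: pass if above threshold)
    nlsd   -- normalized local standard deviation (fog tops are smooth:
              pass if below threshold)
    d_temp -- air temperature (or SST) minus fog-top temperature in K
              (fog hugs the surface: pass if the difference is small)
    """
    tests = (nalb > thresholds[0],
             nlsd < thresholds[1],
             abs(d_temp) < thresholds[2])
    return sum(w for w, passed in zip(weights, tests) if passed)

print(fog_probability(0.8, 0.02, 1.0))   # all three tests pass -> 100
print(fog_probability(0.3, 0.20, 12.0))  # none pass -> 0
```

    Expressing the result as a weighted sum rather than a hard yes/no lets partial evidence (e.g. bright but not smooth) come through as an intermediate probability.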

  1. Change Detection from differential airborne LiDAR using a weighted Anisotropic Iterative Closest Point Algorithm

    Science.gov (United States)

    Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.

    2013-12-01

    Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable and dependent upon highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud, and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate estimated random error for each point in a LiDAR point cloud, and then determined 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah Earthquake between a uniformly weighted and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI), and post-event LiDAR collected by the National Center for Airborne Laser Mapping (NCALM).
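
    The core of each ICP iteration, once correspondences are fixed, is a weighted least-squares rigid alignment. The sketch below solves it with an SVD (Kabsch) step using scalar per-point weights, a simplification of the full anisotropic covariance weighting used in the study; the point cloud and displacement are synthetic.

```python
import numpy as np

def weighted_rigid_align(P, Q, w):
    """Find R, t minimizing sum_i w_i * ||R p_i + t - q_i||^2 (weighted Kabsch)."""
    w = np.asarray(w, float) / np.sum(w)
    p_bar = w @ P                                     # weighted centroids
    q_bar = w @ Q
    H = ((P - p_bar) * w[:, None]).T @ (Q - q_bar)    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Recover a known rigid displacement from a synthetic point cloud.
rng = np.random.default_rng(1)
P = rng.random((50, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.03])
Q = P @ Rz.T + t_true
R, t = weighted_rigid_align(P, Q, w=np.ones(len(P)))
print(np.allclose(R, Rz), np.allclose(t, t_true))  # True True
```

    In the weighted setting, points with large propagated error (long range, steep incidence) simply get small w_i and pull less on the solution.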

  2. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
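
    The SVD step at the heart of such line extraction is a total-least-squares fit: the line direction is the first right singular vector of the mean-centered points. A minimal sketch on synthetic points (the data and noise level are invented):

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit: returns (centroid, unit direction).

    The direction is the first right singular vector of the centered points.
    """
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, Vt[0]

# Noisy 2D range points along y = 2x + 1 (synthetic).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)
pts = np.column_stack([x, 2 * x + 1]) + rng.normal(0, 0.01, (40, 2))
centroid, direction = fit_line_svd(pts)
slope = direction[1] / direction[0]
print(round(slope, 2))  # ≈ 2.0
```

    Unlike ordinary least squares, this fit minimizes perpendicular distances, which matches how a laser scanner's noise is distributed around a wall.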

  3. A study on the influence of the particle packing fraction on the performance of a multilevel contact detection algorithm

    NARCIS (Netherlands)

    Ogarko, V.; Ogarko, V.; Luding, Stefan; Onate, E; Owen, D.R.J

    2011-01-01

    We investigate the influence of the packing fraction of highly polydisperse particle systems on the performance of a high-performance multilevel contact detection algorithm as applied for molecular dynamics type simulations. For best performance, this algorithm requires two or more hierarchy levels

  4. A multi-agent genetic algorithm for community detection in complex networks

    Science.gov (United States)

    Li, Zhangtao; Liu, Jing

    2016-05-01

    Complex networks are popularly used to represent many practical systems in the domains of biology and sociology, and community structure is one of the most important network attributes, having received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best-known quality functions measuring the quality of a network's communities. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded as a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators are designed, namely a split-and-merge-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation, and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms and can detect communities with high speed, accuracy and stability.
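
    The modularity Q that such algorithms optimize can be computed directly from the adjacency matrix; below is a minimal sketch evaluated on a toy graph of two triangles joined by a bridge (the graph is invented for illustration).

```python
import numpy as np

def modularity(A, communities):
    """Newman's modularity Q for an undirected graph.

    A           -- symmetric adjacency matrix
    communities -- community label per node
    """
    k = A.sum(axis=1)                                 # degrees
    two_m = k.sum()                                   # 2 * number of edges
    same = np.equal.outer(communities, communities)   # same-community mask
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge: the natural 2-community split.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(modularity(A, labels), 4))  # 0.3571
```

    Putting every node into one community gives Q = 0 for this formula, which is why maximizing Q rewards splits that beat the random-graph baseline.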

  5. Evaluation of Intrinsic Image Algorithms to Detect the Shadows Cast by Static Objects Outdoors

    Science.gov (United States)

    Isaza, Cesar; Salas, Joaquín; Raducanu, Bogdan

    2012-01-01

    In some automatic scene analysis applications, the presence of shadows becomes a nuisance that must be dealt with. As a consequence, a preliminary stage in many computer vision algorithms is to attenuate their effect. In this paper, we focus our attention on the detection of shadows cast by static objects outdoors, as the scene is viewed for extended periods of time (days, weeks) from a fixed camera, considering daylight intervals where the main source of light is the sun. In this context, we report two contributions. First, we introduce the use of synthetic images for which ground truth can be generated automatically, avoiding the tedious effort of manual annotation. Secondly, we report a novel application of the intrinsic image concept to the automatic detection of shadows cast by static objects outdoors. We make both a quantitative and a qualitative evaluation of several algorithms based on this image representation. For the quantitative evaluation, we used the synthetic data set, while for the qualitative evaluation we used both data sets. Our experimental results show that the evaluated methods can partially solve the problem of shadow detection. PMID:23201998

  6. Evaluation of Intrinsic Image Algorithms to Detect the Shadows Cast by Static Objects Outdoors

    Directory of Open Access Journals (Sweden)

    Cesar Isaza

    2012-10-01

    Full Text Available In some automatic scene analysis applications, the presence of shadows becomes a nuisance that must be dealt with. As a consequence, a preliminary stage in many computer vision algorithms is to attenuate their effect. In this paper, we focus our attention on the detection of shadows cast by static objects outdoors, as the scene is viewed for extended periods of time (days, weeks) from a fixed camera, considering daylight intervals where the main source of light is the sun. In this context, we report two contributions. First, we introduce the use of synthetic images for which ground truth can be generated automatically, avoiding the tedious effort of manual annotation. Secondly, we report a novel application of the intrinsic image concept to the automatic detection of shadows cast by static objects outdoors. We make both a quantitative and a qualitative evaluation of several algorithms based on this image representation. For the quantitative evaluation, we used the synthetic data set, while for the qualitative evaluation we used both data sets. Our experimental results show that the evaluated methods can partially solve the problem of shadow detection.

  7. A Novel Short-Time Fourier Transform-Based Fall Detection Algorithm Using 3-Axis Accelerations

    Directory of Open Access Journals (Sweden)

    Isu Shin

    2015-01-01

    Full Text Available A short-time Fourier transform (STFT)-based algorithm is suggested to distinguish falls from various activities of daily living (ADLs). Forty male subjects volunteered in experiments including three types of falls and four types of ADLs. An inertial sensor unit attached to the middle of the two anterior superior iliac spines was used to measure the 3-axis accelerations at 100 Hz. The measured accelerations were transformed to signal vector magnitude values to be analyzed using STFT. The powers of low-frequency components were extracted, and a fall was detected when the normalized power fell below a threshold (50% of the normal power). Most power was observed at the frequency band below 5 Hz in all activities, but dramatic changes in the power were found only in falls. The specificity of the 1–3 Hz frequency components was the best (100%), but their sensitivity was much smaller than that of the 4 Hz component. The 4 Hz component showed the best fall detection, with 96.9% sensitivity and 97.1% specificity. We believe that the suggested STFT-based algorithm would be useful in fall detection and in classification from ADLs as well.
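
    A minimal version of the windowed low-frequency power extraction might look like the following; the synthetic signal-vector-magnitude trace, window length, and band edges are illustrative stand-ins, not the study's data or tuned parameters.

```python
import numpy as np

def low_band_power(signal, fs, win, band=(1.0, 5.0)):
    """Sliding-window FFT power in a low-frequency band (a minimal STFT)."""
    hop = win // 2
    freqs = np.fft.rfftfreq(win, 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    powers = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft((seg - seg.mean()) * np.hanning(win))) ** 2
        powers.append(spec[mask].sum())
    return np.array(powers)

# Synthetic signal-vector-magnitude trace: quiet activity (small 2 Hz sway)
# with a brief high-amplitude event at t = 2 s standing in for an impact.
fs, win = 100, 100
t = np.arange(0, 4, 1 / fs)
svm = 1 + 0.05 * np.sin(2 * np.pi * 2 * t)
svm[200:210] += 3.0
p = low_band_power(svm, fs, win)
print(np.argmax(p))  # the window centred on the impact dominates the band power
```

    Comparing each window's band power against a baseline level is then a matter of a single threshold test, as in the abstract's 50%-of-normal-power rule.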

  8. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

    Science.gov (United States)

    Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

    2018-01-19

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention, since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than that of other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than that of several well-known counterparts and commercial products.
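
    The placement-independent idea, taking the magnitude of the 3D angular velocity so orientation drops out and then looking for a dominant gait-band peak in its FFT, can be sketched as below; the band limits, power-ratio threshold, and simulated signal are assumptions for the demo.

```python
import numpy as np

def detect_walking(gyro, fs, band=(1.0, 2.5), power_ratio=0.3):
    """Placement-independent walking test on 3-axis angular velocities.

    The magnitude of the angular-velocity vector is rotation-invariant, so
    the phone's placement does not matter; walking shows up as a dominant
    spectral peak in the gait band. Thresholds are illustrative.
    """
    mag = np.linalg.norm(gyro, axis=1)
    spec = np.abs(np.fft.rfft(mag - mag.mean())) ** 2
    freqs = np.fft.rfftfreq(len(mag), 1 / fs)
    peak = int(np.argmax(spec))
    in_band = band[0] <= freqs[peak] <= band[1]
    dominant = spec[peak] > power_ratio * spec.sum()
    return bool(in_band and dominant), float(freqs[peak])

# Simulated walk: a 2 Hz step rhythm distributed over arbitrary axes plus noise.
fs = 50
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(2)
gait = 2 * np.pi * 2.0 * t
walk = np.column_stack([1.0 + 0.8 * np.sin(gait),
                        0.3 * np.sin(gait + 1.0),
                        0.1 * np.cos(gait)]) + rng.normal(0, 0.05, (len(t), 3))
is_walking, step_freq = detect_walking(walk, fs)
print(is_walking, step_freq)  # True 2.0
```

    The detected peak frequency doubles as the step-frequency estimate that a step counter would then track recursively.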

  9. Computationally efficient algorithm for photoplethysmography-based atrial fibrillation detection using smartphones.

    Science.gov (United States)

    Schack, Tim; Safi Harb, Yosef; Muma, Michael; Zoubir, Abdelhak M

    2017-07-01

    Atrial fibrillation (AF) is one of the major causes of stroke, heart failure, sudden death, and cardiovascular morbidity, and the most common type of arrhythmia. Its diagnosis and the initiation of treatment, however, currently require electrocardiogram (ECG)-based heart rhythm monitoring. The photoplethysmogram (PPG) offers an alternative method, which is convenient to record and allows for self-monitoring, thus relieving clinical staff and enabling early AF diagnosis. We introduce a PPG-based AF detection algorithm using smartphones that has a low computational cost and low memory requirements. In particular, we propose a modified PPG signal acquisition, explore new statistical discriminating features and propose simple classification equations by using sequential forward selection (SFS) and support vector machines (SVM). The algorithm is applied to clinical data and evaluated in terms of the receiver operating characteristic (ROC) curve and statistical measures. The combination of Shannon entropy and the median of the peak rise height achieves perfect detection of AF on the recorded data, highlighting the potential of PPG for reliable AF detection.
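
    One of the feature families mentioned, Shannon entropy, can be illustrated on inter-beat intervals: an irregular AF-like rhythm spreads its interval histogram over many bins and so has higher entropy than a steady rhythm. The bin count, interval range, and simulated data below are invented for the demo.

```python
import numpy as np

def shannon_entropy(intervals, bins=16, value_range=(0.3, 1.3)):
    """Shannon entropy (bits) of the inter-beat-interval histogram."""
    hist, _ = np.histogram(intervals, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) contributes nothing
    return float(-(p * np.log2(p)).sum())

gen = np.random.default_rng(3)
regular = 0.80 + gen.normal(0, 0.005, 200)   # steady sinus-like rhythm (s)
irregular = gen.uniform(0.4, 1.2, 200)       # erratic AF-like intervals (s)
print(shannon_entropy(regular) < shannon_entropy(irregular))  # True
```

    In a classifier, such an entropy value would be one coordinate of the feature vector that the SFS/SVM stage selects from and thresholds.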

  10. Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-07

    Solar power ramp events (SPREs) are those that significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetration in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) for ramp detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA significantly improves on the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
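
    A compact version of the underlying swinging door segmentation (before the dynamic-programming merge step) might look like this; the tolerance, ramp-slope threshold, and toy series are illustrative, not the paper's settings.

```python
def swinging_door(y, epsilon):
    """Classic swinging door segmentation: returns segment-boundary indices.

    epsilon is the deviation tolerance; a segment is closed as soon as no
    single straight line can stay within epsilon of every point in it.
    """
    bounds = [0]
    start = 0
    up, low = float("-inf"), float("inf")
    for i in range(1, len(y)):
        up = max(up, (y[i] - y[start] - epsilon) / (i - start))
        low = min(low, (y[i] - y[start] + epsilon) / (i - start))
        if up > low:                      # the doors crossed: close the segment
            bounds.append(i - 1)
            start = i - 1
            up = (y[i] - y[start] - epsilon) / (i - start)
            low = (y[i] - y[start] + epsilon) / (i - start)
    bounds.append(len(y) - 1)
    return bounds

# Flat, steep ramp-up, flat: the ramp comes out as its own segment, and a
# slope test on the segments flags it as a ramp event.
y = [0, 0, 0, 0, 5, 10, 15, 15, 15, 15]
b = swinging_door(y, epsilon=0.5)
ramps = [(i, j) for i, j in zip(b[:-1], b[1:]) if abs(y[j] - y[i]) / (j - i) > 2]
print(b, ramps)  # [0, 3, 6, 9] [(3, 6)]
```

    The optimization in OpSDA then decides which adjacent segments to merge into a single significant ramp, which this sketch leaves out.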

  11. Study on Immune Relevant Vector Machine Based Intelligent Fault Detection and Diagnosis Algorithm

    Directory of Open Access Journals (Sweden)

    Zhong-hua Miao

    2013-01-01

    Full Text Available An immune relevant vector machine (IRVM)-based intelligent classification method is proposed by combining the random real-valued negative selection (RRNS) algorithm and the relevant vector machine (RVM) algorithm. The proposed method is aimed at handling the training problem of missing or incomplete fault sampling data and is inspired by the “self/nonself” recognition principle of artificial immune systems. The detectors, generated by the RRNS, are treated as the “nonself” training samples and used to train the RVM model together with the “self” training samples. After the training succeeds, the “nonself” detection model, which requires only the “self” training samples, is obtained for fault detection and diagnosis. It provides a general way of solving problems of this type and can be applied for both fault detection and fault diagnosis. The standard Fisher's Iris flower dataset is used to experimentally test the proposed method, and the results are compared with those from the support vector data description (SVDD) method. Experimental results show the validity and practicability of the proposed method.

  12. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones

    Directory of Open Access Journals (Sweden)

    Xiaomin Kang

    2018-01-01

    Full Text Available Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention, since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than that of other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than that of several well-known counterparts and commercial products.

  13. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    Science.gov (United States)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain and novel algorithms, and present results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.

  14. CK-LPA: Efficient community detection algorithm based on label propagation with community kernel

    Science.gov (United States)

    Lin, Zhen; Zheng, Xiaolin; Xin, Nan; Chen, Deren

    2014-12-01

    With the rapid development of Web 2.0 and the rise of online social networks, finding community structures from user data has become a hot topic in network analysis. Although research achievements are numerous at present, most cannot be adopted in large-scale social networks because of their heavy computation. Previous studies have shown that label propagation is an efficient means to detect communities in social networks and is easy to implement; however, some drawbacks have been found, such as low accuracy, high randomness, and the formation of a “monster” community. In this study, we propose an efficient community detection method based on the label propagation algorithm (LPA) with a community kernel (CK-LPA). We assign a weight to each node according to its importance in the whole network and update node labels in sequence based on weight. We then discuss the composition of weights, the label updating strategy, the label propagation strategy, and the convergence conditions. CK-LPA resolves the drawbacks of the primitive LPA. Experiments and benchmarks reveal that the proposed method retains nearly linear time complexity and exhibits significant quality improvements in static community detection. Hence, the algorithm can be applied to large-scale social networks.
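
    A toy sketch of the kernel idea: fix the labels of a few high-importance nodes (here simply the highest-degree nodes, a stand-in for CK-LPA's weighting) and let every other node adopt its neighbours' majority label in importance order. The graph and tie-break rule are invented for illustration.

```python
from collections import Counter

def kernel_label_propagation(adj, kernels, iters=10):
    """Kernel-seeded label propagation: kernel nodes keep fixed labels;
    every other node repeatedly adopts the majority label among its
    neighbours (smallest label wins ties), important nodes updating first."""
    labels = {v: (v if v in kernels else None) for v in adj}
    order = sorted(adj, key=lambda v: -len(adj[v]))     # high degree first
    for _ in range(iters):
        for v in order:
            if v in kernels:
                continue
            counts = Counter(labels[u] for u in adj[v] if labels[u] is not None)
            if counts:
                best = max(counts.values())
                labels[v] = min(l for l, c in counts.items() if c == best)
    return labels

# Two 4-cliques joined by a bridge; the two highest-degree nodes (3 and 4)
# act as community kernels.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = kernel_label_propagation(adj, kernels={3, 4})
communities = {frozenset(v for v in labels if labels[v] == l)
               for l in set(labels.values())}
print(communities)
```

    Seeding labels from kernels is one way to curb the "monster" community that plain LPA can produce when one label happens to sweep the whole graph.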

  15. A sensitive algorithm for automatic detection of space-time alternating signals in cardiac tissue.

    Science.gov (United States)

    Jia, Zhiheng; Bien, Harold; Entcheva, Emilia

    2008-01-01

    Alternans, a beat-to-beat alternation in cardiac signals, may serve as a precursor to lethal cardiac arrhythmias, including ventricular tachycardia and ventricular fibrillation. Therefore, alternans is a desirable target of early arrhythmia prediction/detection. For long-term records and in the presence of noise, the definition of alternans is qualitative and ambiguous. This makes their automatic detection in large spatiotemporal data sets almost impossible. We present here a quantitative combinatorics-derived definition of alternans in the presence of random noise and a novel algorithm for automatic alternans detection using criteria like temporal persistence (TP), representative phase (RP) and alternans ratio (AR). This technique is validated by comparison to theoretically-derived probabilities and by test data sets with white noise. Finally, the algorithm is applied to ultra-high resolution optical mapping data from cultured cell monolayers, exhibiting calcium alternans. Early fine-scale alternans, close to the noise level, were revealed and linked to the later formation of larger regions and evolution of spatially discordant alternans (SDA). This robust new technique can be useful in quantification and better understanding of the onset of arrhythmias and in general analysis of space-time alternating signals.
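
    A minimal stand-in for the temporal-persistence criterion checks how long the beat-to-beat differences keep strictly alternating in sign; the run-length threshold and toy beat series are invented for illustration.

```python
def alternans_persistence(beats, min_run=4):
    """Longest run of strictly alternating beat-to-beat differences,
    a minimal stand-in for the temporal-persistence (TP) criterion."""
    diffs = [b - a for a, b in zip(beats, beats[1:])]
    longest = run = 1
    for d0, d1 in zip(diffs, diffs[1:]):
        run = run + 1 if d0 * d1 < 0 else 1   # opposite signs extend the run
        longest = max(longest, run)
    return longest >= min_run, longest

# Large-small-large calcium transients alternate; a flat train does not.
print(alternans_persistence([1.0, 0.6, 1.0, 0.6, 1.0, 0.6]))  # (True, 5)
print(alternans_persistence([1.0, 1.0, 1.0, 1.0]))            # (False, 1)
```

    In noisy recordings the sign test would additionally require the alternation amplitude to exceed the noise floor, which is roughly the role of the alternans ratio (AR) in the paper.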

  16. A Joint-optimized Real-time Target Detection Algorithm for Passive Radar

    Directory of Open Access Journals (Sweden)

    Zhao Yong-ke

    2015-01-01

    Full Text Available Passive radar exploits an external illuminator signal to detect targets. It has the advantages of silence, anti-interference, and counter-stealth ability. In most cases, direct-path and multipath clutter must be suppressed first. Coherent detection can then be performed by computing the cross-ambiguity function between the remaining target echoes and the reference signal. However, under wide-band signal, long integration time, or multi-beam circumstances, a large number of computations and a large amount of memory are required for conventional processing. This paper derives the mathematical relationships between clutter suppression algorithms based on the Minimum Mean Square Error (MMSE) principle and coherent detection algorithms based on the cross-ambiguity function, and presents a jointly optimized processing method. This method reduces the required number of computations and amount of memory, is easy to implement on GPUs using frameworks such as CUDA, and is useful for engineering applications. Its high efficiency and real-time properties are validated by the experimental results.

  17. GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis

    Science.gov (United States)

    Paz, Abel; Plaza, Antonio

    2010-08-01

    Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.
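The abstract does not list the specific detectors ported to the GPU; a common baseline in this literature is the global RX anomaly detector, sketched here in plain NumPy (the GPU versions parallelize exactly this per-pixel Mahalanobis computation):

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector for a (rows, cols, bands) hyperspectral cube.

    Scores each pixel by its Mahalanobis distance from the background
    mean/covariance estimated over the whole scene.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singularity
    diff = X - mu
    # scores_i = diff_i . cov_inv . diff_i for every pixel at once
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(rows, cols)

# Gaussian background plus one spectrally distinct "target" pixel
rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, size=(16, 16, 5))
cube[3, 4] += 10.0  # anomalous spectrum in every band
scores = rx_anomaly_scores(cube)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```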

  18. An algorithm for detecting Trichodesmium surface blooms in the South Western Tropical Pacific

    Directory of Open Access Journals (Sweden)

    Y. Dandonneau

    2011-12-01

    Full Text Available Trichodesmium, a major colonial cyanobacterial nitrogen fixer, forms large blooms in NO3-depleted tropical oceans and enhances CO2 sequestration by the ocean due to its ability to fix dissolved dinitrogen. Thus, its importance in C and N cycles requires better estimates of its distribution at basin to global scales. However, existing algorithms to detect it from satellite have not yet been successful in the South Western Tropical Pacific (SP). Here, a novel algorithm (TRICHOdesmium SATellite, TRICHOSAT), based on radiance anomaly spectra (RAS) observed in SeaWiFS imagery, is used to detect Trichodesmium during the austral summertime in the SP (5° S–25° S, 160° E–170° W). Selected pixels are characterized by a restricted range of parameters quantifying RAS spectra (e.g. slope, intercept, curvature). The fraction of valid (non-cloudy) pixels identified as Trichodesmium surface blooms in the region is low (between 0.01 and 0.2 %), but is about 100 times higher than deduced from previous algorithms. At daily scales in the SP, this fraction represents a total ocean surface area varying from 16 to 48 km2 in winter and from 200 to 1000 km2 in summer (and at monthly scale, from 500 to 1000 km2 in winter and from 3100 to 10 890 km2 in summer, with a maximum of 26 432 km2 in January 1999). The daily distribution of Trichodesmium surface accumulations in the SP detected by TRICHOSAT is presented for the period 1998–2010, which demonstrates that the number of selected pixels peaks in November–February each year, consistent with field observations. This approach was validated with in situ observations of Trichodesmium surface accumulations in the Melanesian archipelago around New Caledonia, Vanuatu and the Fiji Islands for the same period.
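The slope/intercept/curvature parameters of a radiance-anomaly spectrum are simply the coefficients of a quadratic fit; a sketch with synthetic data (band centres and the spectrum itself are illustrative, and the paper's threshold ranges are not reproduced):

```python
import numpy as np

# SeaWiFS-like visible band centres (nm); illustrative values only
wl = np.array([412.0, 443.0, 490.0, 510.0, 555.0])
# A synthetic radiance-anomaly spectrum with known quadratic shape
ras = -5e-5 * wl**2 + 0.05 * wl - 3.0

def ras_parameters(wavelengths, anomaly):
    """Curvature, slope and intercept of a radiance-anomaly spectrum:
    the kind of shape parameters TRICHOSAT thresholds on."""
    curv, slope, intercept = np.polyfit(wavelengths, anomaly, 2)
    return curv, slope, intercept

curv, slope, intercept = ras_parameters(wl, ras)
```

A classifier would then keep only pixels whose (curv, slope, intercept) triple falls in the restricted range learned for Trichodesmium blooms.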

  19. The development of the line-scan image recognition algorithm for the detection of frass on mature tomatoes

    Science.gov (United States)

    Yang, Chun-Chieh; Kim, Moon S.; Millner, Pat; Chao, Kuanglin; Chan, Diane E.

    2012-05-01

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, to compute a simple ratio function for effective detection of frass contamination. The contamination spots were created on the tomato surfaces using four concentrations of aqueous frass dilutions. The algorithm could detect more than 99% of the 0.2 g/ml and 0.1 g/ml frass contamination spots and successfully differentiated these spots from clean tomato surfaces. The results demonstrated that a simple multispectral fluorescence imaging algorithm based on violet LED excitation is appropriate for detecting frass on tomatoes in high-speed post-harvest processing lines.
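The two-band simple-ratio test is trivially small; a sketch assuming an illustrative threshold and ratio direction (the abstract gives the bands, 664 nm and 690 nm, but not the decision threshold):

```python
def frass_detected(i_664, i_690, ratio_threshold=1.2):
    """Two-band simple-ratio test on fluorescence intensities at
    664 nm and 690 nm. Threshold value and ratio direction are
    illustrative, not the paper's calibrated values."""
    if i_690 == 0:
        return False
    return (i_664 / i_690) > ratio_threshold
```

In a line-scan system this test runs per pixel as each image line arrives, which is what makes it suitable for high-speed sorting lines.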

  20. An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.

    Science.gov (United States)

    Behrens, F; Mackeben, M; Schröder-Preikschat, W

    2010-08-01

    This paper presents a saccade-detection algorithm for the analysis of eye-movement time series. It is based on an earlier algorithm but achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye-movement data recorded by EOG while driving a car. A second demonstration of the algorithm detects microsleep episodes in eye-movement data.
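Advantage (1) — a threshold derived from the preceding acceleration data — can be sketched as follows; the window length and multiplier are illustrative, and the full algorithm additionally modifies thresholds during the saccade and checks position monotonicity:

```python
def detect_saccade_onsets(velocity, window=20, k=3.0):
    """Adaptive-threshold saccade-onset detector (simplified sketch of the
    idea in the abstract, not the authors' exact algorithm).

    The acceleration threshold at each sample is k times the standard
    deviation of the acceleration over the preceding `window` samples."""
    accel = [v2 - v1 for v1, v2 in zip(velocity, velocity[1:])]
    onsets = []
    for i in range(window, len(accel)):
        hist = accel[i - window:i]
        mean = sum(hist) / window
        var = sum((a - mean) ** 2 for a in hist) / window
        thresh = k * (var ** 0.5)
        if abs(accel[i]) > thresh and thresh > 0:
            onsets.append(i)
    return onsets

# Low-amplitude jitter followed by an abrupt saccadic velocity step
velocity = [0.0, 0.01] * 15 + [5.0] * 5
onsets = detect_saccade_onsets(velocity)
```

Because the threshold tracks the local noise level, the same detector works for clean eye-tracker data and for noisier EOG recordings without retuning.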

  1. Target detection in diagnostic ultrasound: Evaluation of a method based on the CLEAN algorithm.

    Science.gov (United States)

    Masoom, Hassan; Adve, Raviraj S; Cobbold, Richard S C

    2013-02-01

    A technique is proposed for the detection of abnormalities (targets) in ultrasound images using little or no a priori information and requiring little operator intervention. The scheme is a combination of the CLEAN algorithm, originally proposed for radio astronomy, and constant false alarm rate (CFAR) processing, as developed for use in radar systems. The CLEAN algorithm identifies areas in the ultrasound image that stand out above a threshold in relation to the background; CFAR techniques allow for an adaptive, semi-automated selection of the threshold. Neither appears to have been previously used for target detection in ultrasound images, and never together in any context. As a first step towards assessing the potential of this method, we simulated B-mode images using the widely used Field II program. We assumed the use of a 256-element linear array operating at 3.0 MHz into a water-like medium containing a density of point scatterers sufficient to simulate a background of fully developed speckle. Spherical targets with diameters ranging from 0.25 to 6.0 mm and contrasts ranging from 0 to 12 dB relative to the background were used as test objects. Using a contrast-detail analysis, the probability of detection curves indicate these targets can be consistently detected within a speckle background. Our results indicate that the method has considerable promise for the semi-automated detection of abnormalities with diameters greater than a few millimeters, depending on the contrast. Copyright © 2012 Elsevier B.V. All rights reserved.
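The CFAR half of the scheme can be illustrated with a one-dimensional cell-averaging variant; the paper applies it in 2-D images together with CLEAN, and its guard/training sizes and scale factor are not given, so the values below are illustrative:

```python
def ca_cfar(signal, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: each cell is compared with `scale` times the
    mean of the surrounding training cells, excluding guard cells around
    the cell under test. A 1-D sketch of the adaptive-threshold idea."""
    detections = []
    half = guard + train
    for i in range(half, len(signal) - half):
        left = signal[i - half:i - guard]          # training cells, left side
        right = signal[i + guard + 1:i + half + 1]  # training cells, right side
        noise = (sum(left) + sum(right)) / (2 * train)
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

# Flat speckle-like background with one bright target at index 25
bg = [1.0] * 50
bg[25] = 10.0
hits = ca_cfar(bg)
```

Because the threshold adapts to the local background estimate, the false-alarm rate stays roughly constant as the speckle level varies across the image.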

  2. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

    Science.gov (United States)

    Waheed, Khuram; Salem, Fathi M

    2005-07-01

    Code division multiple access (CDMA) is based on spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require considerable a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding that of linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).

  3. An optimized image analysis algorithm for detecting nuclear signals in digital whole slides for histopathology.

    Science.gov (United States)

    Paulik, Róbert; Micsik, Tamás; Kiszler, Gábor; Kaszál, Péter; Székely, János; Paulik, Norbert; Várhalmi, Eszter; Prémusz, Viktória; Krenács, Tibor; Molnár, Béla

    2017-06-01

    Nuclear estrogen receptor (ER), progesterone receptor (PR) and Ki-67 protein positive tumor cell fractions are semiquantitatively assessed in breast cancer for prognostic and predictive purposes. These biomarkers are usually revealed using immunoperoxidase methods resulting in diverse signal intensity and frequent inhomogeneity in tumor cell nuclei, which are routinely scored and interpreted by a pathologist during conventional light-microscopic examination. In the last decade, digital pathology-based whole slide scanning and image analysis algorithms have shown tremendous development in supporting pathologists in this diagnostic process, which can directly influence patient selection for targeted therapy and chemotherapy. We have developed an image analysis algorithm optimized for whole-slide quantification of nuclear immunostaining signals of ER, PR, and Ki-67 proteins in breast cancers. In this study, we tested the consistency and reliability of this system in a series of both brightfield and DAPI-stained fluorescent samples. Our method allows the separation of overlapping cells and signals, reliable detection of vesicular nuclei and background compensation, especially in FISH-stained slides. Detection accuracy and processing speed were validated in routinely immunostained breast cancer samples of varying reaction intensities and image qualities. Our technique supported automated nuclear signal detection with excellent efficacy: Precision Rate/Positive Predictive Value was 90.23 ± 4.29%, while Recall Rate/Sensitivity was 88.23 ± 4.84%. These factors and the average counting speed of our algorithm were compared with two other open source applications (QuPath and CellProfiler), yielding a 6-7% higher Recall Rate and 4- to 30-fold higher processing speed. In conclusion, our image analysis algorithm can reliably detect and count nuclear signals in digital whole slides or any selected large areas, i.e., hot spots, and thus can support pathologists in assessing these biomarkers.

  4. Noise-reducing algorithms do not necessarily provide superior dose optimisation for hepatic lesion detection with multidetector CT

    Science.gov (United States)

    Dobeli, K L; Lewis, S J; Meikle, S R; Thiele, D L; Brennan, P C

    2013-01-01

    Objective: To compare the dose-optimisation potential of a smoothing filtered backprojection (FBP) and a hybrid FBP/iterative algorithm to that of a standard FBP algorithm at three slice thicknesses for hepatic lesion detection with multidetector CT. Methods: A liver phantom containing a 9.5-mm opacity with a density of 10 HU below background was scanned at 125, 100, 75, 50 and 25 mAs. Data were reconstructed with standard FBP (B), smoothing FBP (A) and hybrid FBP/iterative (iDose4) algorithms at 5-, 3- and 1-mm collimation. 10 observers marked opacities using a four-point confidence scale. Jackknife alternative free-response receiver operating characteristic figure of merit (FOM), sensitivity and noise were calculated. Results: Compared with the 125-mAs/5-mm setting for each algorithm, significant reductions in FOM (p<0.05) and sensitivity (p<0.05) were found for all three algorithms for all exposures at 1-mm thickness and for all slice thicknesses at 25 mAs, with the exception of the 25-mAs/5-mm setting for the B algorithm. Sensitivity was also significantly reduced for all exposures at 3-mm thickness for the A algorithm (p<0.05). Noise for the A and iDose4 algorithms was approximately 13% and 21% lower, respectively, than for the B algorithm. Conclusion: Superior performance for hepatic lesion detection was not shown with either a smoothing FBP algorithm or a hybrid FBP/iterative algorithm compared with a standard FBP technique, even though noise reduction with thinner slices was demonstrated with the alternative approaches. Advances in knowledge: Reductions in image noise with non-standard CT algorithms do not necessarily translate to an improvement in low-contrast object detection. PMID:23392194

  5. Validation of a nonrigid registration error detection algorithm using clinical MRI brain data.

    Science.gov (United States)

    Datteri, Ryan D; Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M

    2015-01-01

    Identification of error in nonrigid registration is a critical problem in the medical image processing community. We recently proposed an algorithm that we call "Assessing Quality Using Image Registration Circuits" (AQUIRC) to identify nonrigid registration errors and have tested its performance using simulated cases. In this paper, we extend our previous work to assess AQUIRC's ability to detect local nonrigid registration errors and validate it quantitatively at specific clinical landmarks, namely the anterior commissure and the posterior commissure. To test our approach on a representative range of error we utilize five different registration methods and use 100 target images and nine atlas images. Our results show that AQUIRC's measure of registration quality correlates with the true target registration error (TRE) at these selected landmarks with an R(2)=0.542. To compare our method to a more conventional approach, we compute local normalized correlation coefficient (LNCC) and show that AQUIRC performs similarly. However, a multi-linear regression performed with both AQUIRC's measure and LNCC shows a higher correlation with TRE than correlations obtained with either measure alone, thus showing the complementarity of these quality measures. We conclude the paper by showing that the AQUIRC algorithm can be used to reduce registration errors for all five algorithms.
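The LNCC baseline the paper compares against has a standard definition; a sketch over a flattened local patch (this is the conventional formula, not the authors' implementation details such as patch size or weighting):

```python
def lncc(a, b):
    """Normalized correlation coefficient between two local patches
    (given as flattened lists): a conventional registration-quality
    measure; values near 1 indicate well-aligned intensities."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    if da == 0 or db == 0:
        return 0.0  # a constant patch carries no correlation information
    return num / (da * db)
```

AQUIRC's measure is complementary to this: LNCC looks at intensity agreement at one landmark, while AQUIRC propagates consistency errors around registration circuits, which is why the paper's multi-linear regression over both correlates better with TRE than either alone.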

  6. Validation of a Non-Rigid Registration Error Detection Algorithm Using Clinical MRI Brain Data

    Science.gov (United States)

    Datteri, Ryan D.; Liu, Yuan; D’Haese, Pierre-François; Dawant, Benoit M.

    2014-01-01

    Identification of error in non-rigid registration is a critical problem in the medical image processing community. We recently proposed an algorithm that we call “Assessing Quality Using Image Registration Circuits” (AQUIRC) to identify non-rigid registration errors and have tested its performance using simulated cases. In this article, we extend our previous work to assess AQUIRC’s ability to detect local non-rigid registration errors and validate it quantitatively at specific clinical landmarks, namely the Anterior Commissure (AC) and the Posterior Commissure (PC). To test our approach on a representative range of error we utilize 5 different registration methods and use 100 target images and 9 atlas images. Our results show that AQUIRC’s measure of registration quality correlates with the true target registration error (TRE) at these selected landmarks with an R2 = 0.542. To compare our method to a more conventional approach, we compute Local Normalized Correlation Coefficient (LNCC) and show that AQUIRC performs similarly. However, a multi-linear regression performed with both AQUIRC’s measure and LNCC shows a higher correlation with TRE than correlations obtained with either measure alone, thus showing the complementarity of these quality measures. We conclude the article by showing that the AQUIRC algorithm can be used to reduce registration errors for all five algorithms. PMID:25095252

  7. Detection and mapping vegetation cover based on the Spectral Angle Mapper algorithm using NOAA AVHRR data

    Science.gov (United States)

    Yagoub, Houria; Belbachir, Ahmed Hafid; Benabadji, Noureddine

    2014-06-01

    Satellite data, taken from the National Oceanic and Atmospheric Administration (NOAA), have been proposed and used for the detection and cartography of vegetation cover in North Africa. The data used were acquired at the Analysis and Application of Radiation Laboratory (LAAR) from the Advanced Very High Resolution Radiometer (AVHRR) sensor of 1 km spatial resolution. The Spectral Angle Mapper (SAM) algorithm has been used for classification in many studies based on high-resolution satellite data. In the present paper, we propose to apply the SAM algorithm to the moderate-resolution NOAA AVHRR sensor data for classifying the vegetation cover. This study also explores other classification methods for low-resolution data. First, the normalized difference vegetation index (NDVI) is extracted from channels 1 and 2 of the AVHRR sensor. In order to obtain an initial density representation of the vegetal formation distribution, a methodology based on the combination of the threshold method and a decision tree is used. This combination is carried out due to the lack of accurate data on the thresholds that delimit each class. Second, based on spectral behavior, a vegetation cover map is developed using the SAM algorithm. Finally, with the use of low-resolution satellite images (NOAA AVHRR) and only two channels, it is possible to identify the most dominant cover types in North Africa, such as cork oak (Liège oak) forests, other forests, cereal cultivation, steppes and bare soil.
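Both ingredients — NDVI from channels 1 and 2, and the SAM angle test — are compact enough to sketch; the reference spectra below are hypothetical placeholders, not the paper's class library:

```python
import math

def ndvi(red, nir):
    """Normalized difference vegetation index from AVHRR channel 1 (red)
    and channel 2 (near-infrared) reflectances."""
    return (nir - red) / (nir + red)

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum; SAM assigns each pixel the class with the smallest angle."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

def classify_sam(pixel, references):
    """Return the name of the reference spectrum at minimum angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

# Hypothetical two-channel (red, NIR) reference spectra
refs = {"vegetation": [0.1, 0.5], "bare_soil": [0.5, 0.2]}
label = classify_sam([0.2, 0.9], refs)
```

Because SAM compares direction rather than magnitude, it is fairly insensitive to illumination differences, which helps when only two broad AVHRR channels are available.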

  8. SAR image change detection algorithm based on stationary wavelet and bi-dimensional intrinsic mode function

    Science.gov (United States)

    Huang, S. Q.; Wang, Z. L.; Xie, T. G.; Li, Z. C.

    2017-09-01

    Speckle noise in synthetic aperture radar (SAR) images is produced by the coherent imaging mechanism and greatly affects the acquisition of change information from multi-temporal SAR images. The two-dimensional stationary wavelet transform (SWT) and bi-dimensional empirical mode decomposition (BEMD) are multi-scale transforms for non-stationary signal processing. Based on their implementation processes and the characteristics of SAR images, this paper proposes a new multi-temporal SAR image change detection method that combines the stationary wavelet transform with bi-dimensional intrinsic mode function (BIMF) features, called the SWT-BIMF algorithm. The contribution of the new algorithm is twofold. One part is the selection of decomposition features for speckle-noise filtering; the other is the enhancement of the selected features, so that more effective change information is obtained. The feasibility of the SWT-BIMF algorithm is verified on measured SAR image data, and good experimental results are obtained.

  9. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in the field of digital image processing in recent years because of their widespread applications. There are many tangled issues to be considered in the processing of images collected from a water medium due to the adverse effects imposed by the environment itself. Image segmentation is preferred as a basal stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is a most promising approach, in which Canny edge detection based segmentation is preferred for its high level of noise immunity and its ability to tackle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation must be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
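Only the gradient stage of Canny is sketched here (the full algorithm, as mapped to the FPGA, adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding); the threshold below is illustrative:

```python
def sobel_edges(img, threshold):
    """Gradient-magnitude edge map using 3x3 Sobel kernels: the gradient
    stage of the Canny pipeline, applied to a 2-D list of grey levels."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# Vertical step edge between columns 3 and 4
img = [[0] * 4 + [255] * 4 for _ in range(6)]
edge_map = sobel_edges(img, threshold=300)
```

The fixed 3x3 neighbourhood and integer kernels are exactly what makes this stage attractive for a streaming FPGA datapath.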

  10. A Cluster-Based Fuzzy Fusion Algorithm for Event Detection in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    ZiQi Hao

    2015-01-01

    Full Text Available As limited energy is one of the tough challenges in wireless sensor networks (WSN), energy saving is important for increasing the lifecycle of the network. Data fusion enables combining information from several sources to provide a unified picture, which can significantly save sensor energy and enhance sensing-data accuracy. In this paper, we propose a cluster-based data fusion algorithm for event detection. We use the k-means algorithm to form the nodes into clusters, which can significantly reduce the energy consumption of intracluster communication. The distances between cluster heads and the event, as well as the energy of the clusters, are fuzzified, and fuzzy logic is used to select the clusters that will participate in data uploading and fusion. The fuzzy logic method is also used by cluster heads for local decisions, and the local decision results are then sent to the base station. Decision-level fusion for the final decision on the event is performed by the base station according to the uploaded local decisions and the fusion support degree of the clusters calculated by the fuzzy logic method. The effectiveness of this algorithm is demonstrated by simulation results.

  11. Sensitivities of a cyclone detection and tracking algorithm: individual tracks and climatology

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, J.G.; Speth, P. [Inst. fuer Geophysik und Meteorologie, Univ. zu Koeln (Germany); Spangehl, T.; Ulbrich, U. [Inst. fuer Geophysik und Meteorologie, Univ. zu Koeln (Germany); Inst. fuer Meteorologie, Freie Univ. Berlin (Germany)

    2005-12-01

    Northern Hemisphere cyclone activity is assessed by applying an algorithm for the detection and tracking of synoptic scale cyclones to mean sea level pressure data. The method, originally developed for the Southern Hemisphere, is adapted for application in the Northern Hemisphere winter season. NCEP-Reanalysis data from 1958/59 to 1997/98 are used as input. The sensitivities of the results to particular parameters of the algorithm are discussed for both case studies and from a climatological point of view. Results show that the choice of settings is of major relevance especially for the tracking of smaller scale and fast moving systems. With an appropriate setting the algorithm is capable of automatically tracking different types of cyclones at the same time: Both fast moving and developing systems over the large ocean basins and smaller scale cyclones over the Mediterranean basin can be assessed. The climatology of cyclone variables, e.g., cyclone track density, cyclone counts, intensification rates, propagation speeds and areas of cyclogenesis and -lysis gives detailed information on typical cyclone life cycles for different regions. The lowering of the spatial and temporal resolution of the input data from full resolution T62/06h to T42/12h decreases the cyclone track density and cyclone counts. Reducing the temporal resolution alone contributes to a decline in the number of fast moving systems, which is relevant for the cyclone track density. Lowering spatial resolution alone mainly reduces the number of weak cyclones. (orig.)

  12. Replication of pSV2-gpt in COS-1 cells: stability of plasmid DNA in the presence and absence of biochemical selection.

    Science.gov (United States)

    Tsui, L C; Breitman, M L

    1985-03-01

    We have previously demonstrated that COS-1 cell lines transformed by pSV2-gpt and maintained under biochemical selection replicate multiple copies of extrachromosomal plasmid DNA (1). We have now examined the replication and stability of this DNA in a representative cell line. In situ hybridization analyses revealed that intense replication of pSV2-gpt occurs in only a small subpopulation of cells and results from bursts of plasmid replication that occur periodically and spontaneously in the cell population. This suggests that COS-1 cells are only semipermissive for pSV2-gpt replication. No correlation was observed between levels of pSV2-gpt replication and the presence or absence of biochemical selection for the Gpt marker. However, growth of cells under nonselective conditions led to a rapid and progressive loss of pSV2-gpt DNA. This loss correlated with segregation of Gpt- revertants that lacked detectable plasmid sequences. Hence, maintenance of pSV2-gpt in the cell line was dependent on continuous biochemical selection. Stable replication of pSV2-gpt could be observed as late as four months after transfection, suggesting that this system might be useful for propagation of cloned DNA in COS-1 cells for extended periods of time. However, by nine months, extensive rearrangements of pSV2-gpt sequences were detected, indicating ultimate instability of the plasmid in the host cells.

  13. A novel algorithm based on wavelet transform for ship target detection in optical remote sensing images

    Science.gov (United States)

    Huang, Bo; Xu, Tingfa; Chen, Sining; Huang, Tingting

    2017-07-01

    The rapid development of satellite observation technology provides a very rich source of data for sea reconnaissance and ship surveillance. Faced with such vast sea remote sensing data, there is an urgent need to realize automatic ship detection in optical remote sensing images. However, optical remote sensing images are easily affected by meteorological conditions, such as clouds and waves, which result in more false alarms, and the weak contrast between target and background in optical remote sensing images easily causes missed detections. In this paper, a novel algorithm based on the wavelet transform for ship target detection in optical remote sensing images is proposed, which can effectively remove such noise and interference. Segmentation of sea and land background is first applied in image preprocessing to achieve more accurate detection results, and then the discrete wavelet transform is used to process the sea background portion. The results show that almost all of the offshore ships can be detected, and through comparison of the results of four different wavelet basis functions, the accuracy of ship detection is further improved.
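A single level of the 2-D Haar transform illustrates how a discrete wavelet decomposition separates a bright target from a smooth sea background (the paper compares four wavelet bases; Haar is used here only because it is the simplest):

```python
def haar2d(img):
    """One level of the 2-D Haar discrete wavelet transform.
    Returns (LL, LH, HL, HH) sub-bands; image dimensions must be even."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            LL[y // 2][x // 2] = (a + b + c + d) / 4.0  # local average
            LH[y // 2][x // 2] = (a + b - c - d) / 4.0  # top/bottom detail
            HL[y // 2][x // 2] = (a - b + c - d) / 4.0  # left/right detail
            HH[y // 2][x // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

sea = [[2.0] * 4 for _ in range(4)]          # flat sea: no detail energy
LL, LH, HL, HH = haar2d(sea)

ship = [row[:] for row in sea]
ship[0][0] = 10.0                            # a bright target on the sea
LL2, LH2, HL2, HH2 = haar2d(ship)
```

A flat sea produces zero detail coefficients, so any significant energy in the detail sub-bands flags a candidate ship while broad low-frequency variation (e.g. thin haze) stays in LL.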

  14. [A quick algorithm of dynamic spectrum photoelectric pulse wave detection based on LabVIEW].

    Science.gov (United States)

    Lin, Ling; Li, Na; Li, Gang

    2010-02-01

    Dynamic spectrum (DS) detection is attractive among the numerous noninvasive blood component detection methods because it eliminates the main interference of individual discrepancy and measurement conditions. DS is a kind of spectrum extracted from the photoelectric pulse wave and closely related to arterial blood; it can be used in noninvasive examination of blood component concentration. The key issues in DS detection are high detection precision and high operation speed. The measurement precision can be improved by using over-sampling and lock-in amplification in the pick-up of the photoelectric pulse wave in DS detection. In the present paper, the theoretical expression of the over-sampling and lock-in amplification method is deduced first. Then, to overcome the large data volume and heavy computation brought by this technique, a quick algorithm based on LabVIEW and a method of using external C code in the pick-up of the photoelectric pulse wave are presented. Experimental verification was conducted in the LabVIEW environment. The results show that, with the presented method, the operation speed is greatly increased and the data memory is largely reduced.
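The lock-in principle behind the pick-up stage can be sketched in a few lines: multiply the over-sampled signal by quadrature references at the modulation frequency and average, which recovers the amplitude of that single component while rejecting off-frequency noise (a generic digital lock-in, not the paper's LabVIEW/C implementation):

```python
import math

def lock_in_amplitude(samples, fs, f_ref):
    """Digital lock-in detection: project the signal onto cos/sin
    references at f_ref and average over the record."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, s in enumerate(samples):
        phase = 2.0 * math.pi * f_ref * k / fs
        i_sum += s * math.cos(phase)
        q_sum += s * math.sin(phase)
    # Factor 2 converts the mean projections back to peak amplitude
    return 2.0 * math.hypot(i_sum / n, q_sum / n)

# A 50 Hz component of amplitude 0.7 (plus a DC offset) sampled at 1 kHz
fs = 1000.0
sig = [0.7 * math.sin(2 * math.pi * 50 * k / fs) + 0.2 for k in range(1000)]
amp = lock_in_amplitude(sig, fs, 50.0)
```

The averaging over many over-sampled points is exactly where the data volume grows, motivating the paper's fast LabVIEW algorithm with external C code for the inner loop.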

  15. A CCTV system with SMS alert (CMDSA): An implementation of pixel processing algorithm for motion detection

    Science.gov (United States)

    Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd

    2017-10-01

    Closed-Circuit TV (CCTV) systems are one of the technologies in the surveillance field that address detection and monitoring by providing extra features such as email alerts or motion detection. However, detection and alerting of the admin in a CCTV system may be complicated by the need to integrate the main program with an external Application Programming Interface (API). In this study, a pixel processing algorithm is applied for its efficiency, and an SMS alert is added as an alternative solution for users who opted out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and coding, were implemented on an SQLite database and in the C# programming language, respectively. The main modules of CMDSA are motion detection, capturing and saving video, image processing and Short Message Service (SMS) alert functions. The system is able to reduce the processing time, making the detection process faster, to reduce the space and memory used to run the program, and to alert the system admin instantly.
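Pixel-processing motion detection amounts to counting per-pixel grey-level changes between consecutive frames; a minimal sketch with illustrative thresholds (CMDSA's actual parameters are not given in the abstract):

```python
def motion_detected(prev_frame, curr_frame, pixel_delta=25, min_changed=10):
    """Frame-difference motion test: count pixels whose grey level changed
    by more than `pixel_delta` between frames; motion is reported when the
    count exceeds `min_changed`."""
    changed = 0
    for row_a, row_b in zip(prev_frame, curr_frame):
        for a, b in zip(row_a, row_b):
            if abs(a - b) > pixel_delta:
                changed += 1
    return changed > min_changed

prev = [[10] * 16 for _ in range(16)]
curr = [row[:] for row in prev]
for y in range(4, 9):        # a moving object brightens a 5x5 block
    for x in range(4, 9):
        curr[y][x] = 200
```

Requiring a minimum number of changed pixels, rather than any single change, is what keeps sensor noise from triggering spurious SMS alerts.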

  16. A fast weak motif-finding algorithm based on community detection in graphs.

    Science.gov (United States)

    Jia, Caiyan; Carson, Matthew B; Yu, Jian

    2013-07-17

    Identification of transcription factor binding sites (also called 'motif discovery') in DNA sequences is a basic step in understanding genetic regulation. Although many successful programs have been developed, the problem is far from being solved on account of the diversity in gene expression/regulation and the low specificity of binding sites. State-of-the-art algorithms have their own constraints (e.g., high time or space complexity for finding long motifs, low precision in identification of weak motifs, or the OOPS constraint: one occurrence of the motif instance per sequence), which limit their scope of application. In this paper, we present a novel and fast algorithm we call TFBSGroup. It is based on community detection in a graph and is used to discover long and weak (l, d) motifs under the ZOMOPS constraint (zero, one or multiple occurrence(s) of the motif instance(s) per sequence), where l is the length of a motif and d is the maximum number of mutations between a motif instance and the motif itself. First, TFBSGroup transforms the (l, d) motif search in sequences into the discovery of dense subgraphs within a graph. It identifies these subgraphs using a fast community detection method to obtain coarse-grained candidate motifs. Next, it greedily refines these candidate motifs towards the true motif within their own communities. Empirical studies on synthetic (l, d) samples have shown that TFBSGroup is very efficient (e.g., it can find true (18, 6) and (24, 8) motifs within 30 seconds). More importantly, the algorithm has succeeded in rapidly identifying motifs in a large data set of prokaryotic promoters generated from the Escherichia coli database RegulonDB. The algorithm has also accurately identified motifs in ChIP-seq data sets for 12 mouse transcription factors involved in ES cell pluripotency and self-renewal. Our novel heuristic algorithm, TFBSGroup, is able to quickly identify nearly exact matches for long and weak (l, d) motifs in DNA sequences.
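The (l, d) instance test that underlies ZOMOPS-style search — does a length-l motif occur with at most d mismatches, zero, one or many times per sequence — can be sketched directly (this is only the matching primitive, not TFBSGroup's graph construction or community detection):

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(1 for x, y in zip(a, b) if x != y)

def motif_occurrences(sequence, motif, d):
    """All start positions where `motif` occurs with at most d mismatches,
    allowing zero, one or multiple occurrences per sequence (ZOMOPS)."""
    l = len(motif)
    return [i for i in range(len(sequence) - l + 1)
            if hamming(sequence[i:i + l], motif) <= d]

# Exact hits at 0, 4 and 14; a 1-mismatch hit (ACGA) at 10
hits = motif_occurrences("ACGTACGTTTACGAACGT", "ACGT", 1)
```

TFBSGroup's graph has one vertex per l-mer, with edges between l-mers whose Hamming distance is small enough that they could be instances of a common (l, d) motif; dense communities in that graph then yield candidate motifs.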

  17. An algorithm for lunar crater accurate boundary detection based on DEM data

    Science.gov (United States)

    Zeng, Xingguo; Zhang, Hongbo; Chen, Wangli

    2017-10-01

    Craters are among the most significant topographic features widely distributed across the lunar surface. Most craters originate from the impact of small bodies on the lunar surface, and some may also be formed by geological evolution from the interior of the Moon. Crater dating is a classical way to estimate the relative ages of lunar geological layers, and geomorphologic parameters of craters such as diameter, depth, and boundary are also valuable for studying lunar surface space weathering and geological evolution. The identification of lunar craters from lunar exploration data has therefore always been fundamental to lunar crater study. Nowadays, more and more high-resolution DOM and DEM data have been acquired and released by different lunar exploration missions such as LRO, SELENE and the Chang'e missions. In addition, many crater identification methods based on image recognition have been developed to automatically identify lunar craters. Although the location and general shape of craters can be roughly detected with these methods, most of them cannot provide an accurate boundary for a crater. Without an accurate boundary, the diameter and other crater parameters cannot be measured accurately, which impedes lunar crater study. To solve this problem, we developed an algorithm that detects the accurate boundary of a crater from DEM data. The main idea of this algorithm is as follows: first, we search for the lowest point in the center area of the crater; then, we compute elevation profiles from this lowest point to points around the crater rim in order to obtain the top points near the rim; after that, a least-squares method is used to fit a circle to these top points; finally, an accurate circle is created which can be considered the accurate boundary of the crater
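The final circle-fitting step can be sketched with the algebraic (Kasa) least-squares fit, a common formulation for this task (the abstract does not say which least-squares variant the authors use): writing the circle as x^2 + y^2 + Dx + Ey + F = 0 turns the fit into a small linear system.

```python
def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).
    Model: x^2 + y^2 + D*x + E*y + F = 0, solved via 3x3 normal equations.
    Returns (cx, cy, r) for the best-fit circle through the rim points."""
    # Build normal equations A^T A p = A^T b with rows [x, y, 1], b = -(x^2 + y^2)
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        b = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * b
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting on the augmented 3x4 system
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    D, E, F = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = -D / 2.0, -E / 2.0
    r = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, r
```

Fitting the four rim points (7, 3), (2, 8), (-3, 3), (2, -2) recovers center (2, 3) and radius 5.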

  18. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    Science.gov (United States)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches to extracting relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm, where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology: a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss four algorithms: the Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA), and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine; it can be used to validate the results of various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge, while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms by detecting potential system-health issues in data from a spectral emulator with tunable health parameters.
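As a minimal illustration of the data-driven family of methods (a generic sketch, not NASA's SDA), the leading PCA direction of a set of spectra can be found with plain power iteration on the sample covariance matrix:

```python
def first_principal_component(data, iters=200):
    """Leading eigenvector of the sample covariance of `data` via power iteration.
    data: list of equal-length feature vectors (e.g. spectra).
    Note: a sketch; it assumes the deterministic start vector is not orthogonal
    to the leading eigenvector."""
    n, m = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(m)]
    X = [[row[j] - means[j] for j in range(m)] for row in data]
    # Sample covariance matrix (biased, divides by n; fine for direction-finding)
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / n for j in range(m)]
         for i in range(m)]
    v = [1.0] + [0.0] * (m - 1)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

On data lying along the (1, 1) direction, the routine returns the unit vector (1/sqrt(2), 1/sqrt(2)).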

  19. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Directory of Open Access Journals (Sweden)

    Alexander V Maltsev

    Full Text Available Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and

  20. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Science.gov (United States)

    Maltsev, Alexander V; Parsons, Sean P; Kim, Mary S; Tsutsui, Kenta; Stern, Michael D; Lakatta, Edward G; Maltsev, Victor A; Monfredi, Oliver

    2017-01-01

    Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle
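The birth/death logic described above can be sketched for a single pixel's intensity trace (a simplification: the published algorithm operates on motion-corrected 2D image series after spatial filtering, and the thresholds here are illustrative): an event is "born" when the frame-to-frame increase exceeds a sensitivity threshold and "dies" when the signal decays to a fraction of its running peak.

```python
def detect_lcr_events(trace, rise_thresh, decay_fraction=0.5):
    """Detect (birth_frame, death_frame) pairs in one pixel's intensity trace.
    Birth: frame-to-frame increase >= rise_thresh (differential sensitivity).
    Death: signal decays to <= decay_fraction of its running peak."""
    events = []
    birth, peak = None, None
    for t in range(1, len(trace)):
        if birth is None:
            if trace[t] - trace[t - 1] >= rise_thresh:
                birth, peak = t, trace[t]
        else:
            peak = max(peak, trace[t])
            if trace[t] <= decay_fraction * peak:
                events.append((birth, t))
                birth, peak = None, None
    return events
```

A real detector would additionally require the rise to span a sufficiently large area and would suppress events that merge into the whole-cell transient, as the abstract notes.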

  1. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    Science.gov (United States)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which hinder accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, and MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and an additional three in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other
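The simple-threshold idea can be sketched as follows (the channel and the 200 K cutoff are illustrative assumptions, not the paper's calibrated values): large ice in deep convection scatters upwelling radiation and depresses high-frequency brightness temperatures, so a hail flag fires when the observed brightness temperature drops below a threshold.

```python
def hail_flag(tb_k, threshold_k=200.0):
    """Flag possible hail when a high-frequency brightness temperature (K)
    is depressed below the threshold (illustrative value only)."""
    return tb_k < threshold_k

def probability_of_detection(tbs, hail_reports, threshold_k=200.0):
    """Fraction of surface-reported hail events the threshold proxy detects
    (the ~40 percent figure in the abstract is this kind of statistic)."""
    hits = sum(1 for tb, hail in zip(tbs, hail_reports)
               if hail and hail_flag(tb, threshold_k))
    return hits / sum(hail_reports)
```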

  2. The Invisibles: A Detection Algorithm to Trace the Faintest Milky Way Satellites

    Science.gov (United States)

    Walsh, S. M.; Willman, B.; Jerjen, H.

    2009-01-01

    A specialized data-mining algorithm has been developed using wide-field photometry catalogs, enabling systematic and efficient searches for resolved, extremely low surface brightness satellite galaxies in the halo of the Milky Way (MW). Tested and calibrated with the Sloan Digital Sky Survey Data Release 6 (SDSS-DR6) we recover all 15 MW satellites recently detected in SDSS, six known MW/Local Group dSphs in the SDSS footprint, and 19 previously known globular and open clusters. In addition, 30 point-source overdensities have been found that correspond to no cataloged objects. The detection efficiencies of the algorithm have been carefully quantified by simulating more than three million model satellites embedded in star fields typical of those observed in SDSS, covering a wide range of parameters including galaxy distance, scale length, luminosity, and Galactic latitude. We present several parameterizations of these detection limits to facilitate comparison between the observed MW satellite population and predictions. We find that all known satellites would be detected with >90% efficiency over all latitudes spanned by DR6 and that the MW satellite census within DR6 is complete to a magnitude limit of MV ≈ -6.5 and a distance of 300 kpc. Assuming all existing MW satellites contain an appreciable old stellar population and have sizes and luminosities comparable with currently known companions, we predict lower and upper limit totals of 52 and 340 MW dwarf satellites within ~260 kpc if they are uniformly distributed across the sky. This result implies that many MW satellites still remain undetected. Identifying and studying these elusive satellites in future survey data will be fundamental to test the dark matter distribution on kpc scales.

  3. Structural Damage Detection with Different Objective Functions in Noisy Conditions Using an Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Faisal Shabbir

    2017-12-01

    Full Text Available Dynamic properties such as natural frequencies and mode shapes are directly affected by damage in structures. In this paper, changes in natural frequencies and mode shapes were used as the input to various objective functions for damage detection. Objective functions related to natural frequencies, mode shapes, modal flexibility and modal strain energy have been used, and their performances have been analyzed under varying noise conditions. Three beams were analyzed: two were simulated beams with single and multiple damage scenarios, and one was an experimental beam. To do this, SAP 2000 (v14, Computers and Structures Inc., Berkeley, CA, United States, 2009) was linked with MATLAB (r2015, The MathWorks, Inc., Natick, MA, United States, 2015). The genetic algorithm (GA), an evolutionary algorithm (EA), was used to update the damaged structure for damage detection. Due to the degradation of the performance of objective functions under varying noisy conditions, a modified objective function based on the concept of regularization has been proposed, which can be effectively used in combination with an EA. All three beams were used to validate the proposed procedure. It has been found that the modified objective function gives better results even in noisy and actual experimental conditions.

  4. A Novel Rear-End Collision Detection Algorithm Based on GNSS Fusion and ANFIS

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2017-01-01

    Full Text Available Rear-end collisions are one of the most common types of accidents on roads. Global Navigation Satellite Systems (GNSS) have recently become sufficiently flexible and cost-effective to have great potential for use in rear-end collision avoidance systems (CAS). Nevertheless, there are two main issues associated with current vehicle rear-end CAS: (1) achieving relative vehicle positioning and dynamic parameters with sufficiently high accuracy and (2) a reliable method to extract the car-following status from such information. This paper introduces a novel integrated algorithm for rear-end collision detection. Access to high-accuracy positioning is enabled by fusing GNSS, electronic compass, and lane information with a Cubature Kalman Filter (CKF). The judgment of the car-following status is based on the application of the Adaptive Neuro-Fuzzy Inference System (ANFIS). The field test results show that the designed algorithm could effectively detect rear-end collisions with an accuracy of 99.61% and a false alarm rate of 5.26% at a 10 Hz output rate.

  5. Crack detection in a beam with an arbitrary number of transverse cracks using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Khaji, N. [Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mehrjoo, M. [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2014-03-15

    In this paper, a crack detection approach is presented for detecting the depth and location of cracks in beam-like structures. For this purpose, a new beam element with an arbitrary number of embedded transverse edge cracks, in arbitrary positions of the beam element and with any depth, is derived. The components of the stiffness matrix for the cracked element are computed using the conjugate beam concept and Betti's theorem, and are finally represented in closed-form expressions. The proposed beam element is efficiently employed for solving the forward problem (i.e., to obtain precise natural frequencies and mode shapes of the beam knowing the cracks' characteristics). To validate the proposed element, results obtained with the new element are compared with two-dimensional (2D) finite element results and available experimental measurements. Moreover, by knowing the natural frequencies and mode shapes, an inverse problem is established in which the location and depth of cracks are determined. In the inverse approach, an optimization problem based on the new finite element and genetic algorithms (GAs) is solved to search for the solution. It is shown that the present algorithm is able to identify various crack configurations in a cracked beam. The proposed approach is verified through a cracked beam containing various cracks with different depths.

  6. Simple and Robust Realtime QRS Detection Algorithm Based on Spatiotemporal Characteristic of the QRS Complex.

    Directory of Open Access Journals (Sweden)

    Jinkwon Kim

    Full Text Available The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of the QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation or thresholding, without complex and computationally costly operations like a wavelet transformation. The QRS detection performance is evaluated using both the MIT-BIH arrhythmia database and the AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. According to the database, the SE and PPV were 99.90% and 99.91% in the MIT-BIH database and 99.84% and 99.84% in the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back-searching process.

  7. Simple and Robust Realtime QRS Detection Algorithm Based on Spatiotemporal Characteristic of the QRS Complex.

    Science.gov (United States)

    Kim, Jinkwon; Shin, Hangsik

    2016-01-01

    The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of the QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation or thresholding, without complex and computationally costly operations like a wavelet transformation. The QRS detection performance is evaluated using both the MIT-BIH arrhythmia database and the AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. According to the database, the SE and PPV were 99.90% and 99.91% in the MIT-BIH database and 99.84% and 99.84% in the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back-searching process.
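The dual amplitude/duration criterion can be sketched on a differentiated signal (a strong simplification of the published pipeline; the threshold and window lengths here are illustrative, not the paper's values): a run of steep samples is accepted as QRS-like only if its length falls inside a physiologically plausible window.

```python
def detect_qrs(signal, amp_thresh, min_dur, max_dur):
    """Return start indices of QRS-like events using dual criteria:
    amplitude  - the slope magnitude must exceed amp_thresh;
    duration   - the above-threshold run must last min_dur..max_dur samples."""
    diffs = [abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1)]
    events, start = [], None
    for i, d in enumerate(diffs):
        if d >= amp_thresh:
            if start is None:
                start = i
        elif start is not None:
            if min_dur <= i - start <= max_dur:
                events.append(start)
            start = None
    if start is not None and min_dur <= len(diffs) - start <= max_dur:
        events.append(start)
    return events
```

The duration criterion is what rejects single-sample noise spikes that pass the amplitude test alone.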

  8. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Dae-Ho Kwak

    2013-12-01

    Full Text Available This study presents a fault detection method for roller bearings using signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. Given the perspective of the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a defect on bearings could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system.
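Two of the quantities above are simple to state directly: the Teager-Kaiser Energy Operator is psi[n] = x[n]^2 - x[n-1]*x[n+1], which emphasizes short impulsive components, and kurtosis is the normalized fourth central moment, which grows when a signal contains defect-like repeating peaks. A minimal sketch:

```python
def teager_kaiser(x):
    """TKEO: psi[n] = x[n]^2 - x[n-1]*x[n+1]; highlights impulsive energy."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def kurtosis(x):
    """Normalized fourth central moment m4 / m2^2 (no excess subtraction).
    Large for spiky, defect-laden signals; ~1 for a square-like waveform."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)
```

A single large spike in an otherwise flat record raises kurtosis well above that of a uniformly alternating signal, which is exactly the sensitivity the MED/TKEO preprocessing is meant to enhance.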

  9. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves successively with three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  10. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    Science.gov (United States)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.

  11. Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors

    Science.gov (United States)

    Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic

    2017-04-01

    By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs best out of all considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
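The verification measure used throughout, the Critical Success Index, is hits divided by the sum of hits, misses, and false alarms; correct negatives are ignored, which makes CSI suitable for rare events like hail:

```python
def critical_success_index(predicted, observed):
    """CSI = hits / (hits + misses + false_alarms) from paired boolean flags."""
    hits = sum(1 for p, o in zip(predicted, observed) if p and o)
    misses = sum(1 for p, o in zip(predicted, observed) if not p and o)
    false_alarms = sum(1 for p, o in zip(predicted, observed) if p and not o)
    return hits / (hits + misses + false_alarms)
```

For example, a proxy with 45 hits, 30 misses, and 25 false alarms scores CSI = 45/100 = 45%, the figure quoted for the Ku mixed-phase reflectivity algorithm.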

  12. Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds

    Science.gov (United States)

    Thiele, S.; Grose, L.; Micklethwaite, S.

    2016-12-01

    UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
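The least-cost path search at the core of this approach can be sketched with Dijkstra's algorithm over a point-neighbourhood graph (a generic sketch: the authors' actual cost function, which would encode fracture-like colour and geometry cues, is not specified here and is passed in as a parameter):

```python
import heapq

def least_cost_path(neighbors, cost, start, goal):
    """Dijkstra search over a point-neighbourhood graph.
    neighbors: dict mapping node -> iterable of adjacent nodes
    cost: function (u, v) -> non-negative edge cost (e.g. penalising
          departures from a fracture-trace signature)
    Returns the node list from start to goal, or None if unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v in neighbors.get(u, ()):
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in done:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

The user-picked end points of a fracture trace play the role of `start` and `goal`, and intermediate waypoints simply chain several such searches together.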

  13. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    Science.gov (United States)

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner, achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
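Of the filters listed, the Sobel operator is the simplest to show. A serial reference version (the per-pixel work that a GPU kernel would parallelize) applies two 3x3 convolutions and combines them into a gradient magnitude:

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels on a 2D list of grey values.
    Border pixels are left at 0; a GPU version computes each out[y][x]
    independently, which is why the filter parallelizes so well."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a vertical step edge the response peaks at the step and is zero in the flat regions on either side.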

  14. Influenza detection and prediction algorithms: comparative accuracy trial in Östergötland county, Sweden, 2008-2012.

    Science.gov (United States)

    Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T

    2017-07-01

    Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best-performing prediction method. We conclude that evidence was found that available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combining methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.

  15. Detection of Spam Email by Combining Harmony Search Algorithm and Decision Tree

    Directory of Open Access Journals (Sweden)

    M. Z. Gashti

    2017-06-01

    Full Text Available Spam email is probably the main problem faced by most e-mail users. There are many features in spam email detection, and some of them have little effect on detection and skew the detection and classification of spam email. Thus, Feature Selection (FS) is one of the key topics in spam email detection systems. By choosing the important and effective features for classification, performance can be optimized. A feature selector has the task of finding a subset of features that improves the accuracy of the predictions. In this paper, a hybrid of the Harmony Search Algorithm (HSA) and a decision tree is used for selecting the best features and for classification. The results obtained on the Spambase dataset show that the recognition accuracy of the proposed model is 95.25%, which is high in comparison with models such as SVM, NB, J48 and MLP. The accuracy of the proposed model on the Ling-spam and PU1 datasets is also high in comparison with models such as NB, SVM and LR.
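
    Harmony search over binary feature masks can be sketched as below. The fitness function is a stand-in for classifier accuracy (the paper uses a decision tree), and all parameter values are illustrative, not the paper's.

```python
import random

random.seed(0)

N_FEATURES = 8
HMS = 6       # harmony memory size
HMCR = 0.9    # harmony memory considering rate
PAR = 0.3     # pitch (bit-flip) adjusting rate

def fitness(mask):
    """Toy objective: reward selecting the 'informative' features 0-3,
    penalize extra features (stands in for classifier accuracy)."""
    good = sum(mask[i] for i in range(4))
    cost = 0.05 * sum(mask)
    return good - cost

# Initialize harmony memory with random feature subsets
memory = [[random.randint(0, 1) for _ in range(N_FEATURES)]
          for _ in range(HMS)]
for _ in range(200):
    new = []
    for i in range(N_FEATURES):
        if random.random() < HMCR:
            bit = random.choice(memory)[i]   # take from memory
            if random.random() < PAR:
                bit = 1 - bit                # pitch adjustment: flip bit
        else:
            bit = random.randint(0, 1)       # random consideration
        new.append(bit)
    worst = min(range(HMS), key=lambda k: fitness(memory[k]))
    if fitness(new) > fitness(memory[worst]):
        memory[worst] = new                  # replace worst harmony

best = max(memory, key=fitness)
```

    The best harmony converges toward the subset containing the four informative features with few extras.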

  16. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG.

    Science.gov (United States)

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One important hurdle in applications of the DWT is its settings, which were chosen empirically or arbitrarily in previous works. This study aimed to develop a framework for automatically searching for the optimal DWT settings to improve accuracy and reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data with 7 commonly used wavelet families, to the maximum theoretical level of each mother wavelet. Wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which showed optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of the DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets.
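
    The decomposition step can be illustrated with one level of the simplest wavelet family (Haar). The study searched 7 families to the maximum theoretical level, which for a length-n signal with single-tap-style decimation is floor(log2 n); this sketch is generic, not the paper's pipeline.

```python
import math

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

signal = [4.0, 4.0, 2.0, 2.0, 5.0, 1.0, 0.0, 0.0]
approx, detail = haar_dwt(signal)
max_level = int(math.log2(len(signal)))  # deepest possible level here
```

    The detail coefficients light up only where adjacent samples differ, which is why sub-band energies are useful seizure features.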

  17. Electroencephalogram Signal Classification for Automated Epileptic Seizure Detection Using Genetic Algorithm.

    Science.gov (United States)

    Nanthini, B Suguna; Santhi, B

    2017-01-01

    Epilepsy arises when repeated seizures occur in the brain. The electroencephalogram (EEG) test provides valuable information about brain function and can be useful for detecting brain disorders, especially epilepsy. In this study, an automated seizure detection model is introduced. The EEG signals are decomposed into sub-bands by the discrete wavelet transform using the db2 (Daubechies) wavelet. Sixteen features, namely eight statistical features, four gray-level co-occurrence matrix features and Rényi entropy estimates with four different orders, are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select eight relevant features from the 16-dimensional feature set. The model has been trained and tested successfully using a support vector machine (SVM) classifier for EEG signals. The performance of the SVM classifier is evaluated on two different databases. The study has been run through two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as input to the SVM classifier. The relevant features chosen by the GA give better accuracy for seizure detection.
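
    The Rényi entropy features mentioned above are straightforward to compute for a discrete distribution; the order used below is an arbitrary example, not necessarily one of the four orders used in the study.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy (in bits) of order alpha for a discrete
    distribution p; requires alpha > 0 and alpha != 1
    (alpha -> 1 recovers Shannon entropy)."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.70, 0.10, 0.10, 0.10]
h_uniform = renyi_entropy(uniform, 2.0)  # maximal: log2(4) = 2 bits
h_peaked = renyi_entropy(peaked, 2.0)    # lower for a concentrated p
```

    A peaked distribution (e.g., energy concentrated in one sub-band) yields lower entropy than a uniform one, which is what makes this a discriminative feature.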

  18. Development of hybrid fog detection algorithm (FDA) using satellite and ground observation data for nighttime

    Science.gov (United States)

    Kim, So-Hyeong; Han, Ji-Hae; Suh, Myoung-Seok

    2017-04-01

    In this study, we developed a hybrid fog detection algorithm (FDA) for nighttime using AHI/Himawari-8 satellite and ground observation data. To detect fog at nighttime, the Dual Channel Difference (DCD) method, based on the emissivity difference between the SWIR and IR1 channels, is most widely used. The DCD is good at discriminating fog from other scenes (middle/high clouds, clear sea and land), but it is difficult to distinguish fog from low clouds. To separate low clouds from the pixels that satisfy the fog thresholds in the DCD test, we conducted supplementary tests such as the normalized local standard deviation (NLSD) of BT11 and the difference between fog top temperature (BT11) and air temperature (Ta) from NWP data (SST from OSTIA data). These tests are based on the greater homogeneity of fog tops compared with low cloud tops and the similarity of fog top temperature to Ta (SST). Threshold values for the three tests were optimized through ROC analysis for the selected fog cases. In addition, considering the spatial continuity of fog, post-processing was performed to detect missed pixels, in particular at the edges of fog or for sub-pixel-size fog. The final fog detection results are presented as a fog probability (0-100%). Validation was conducted by comparing the fog detection probability with ground-observed visibility data from KMA. The validation results showed that POD and FAR ranged from 0.70 to 0.94 and from 0.45 to 0.72, respectively. The quantitative validation and visual inspection indicate that the current FDA tends to over-detect fog, so further work reducing the FAR is needed. In the future, we will also validate sea fog using CALIPSO data.

  19. A Novel Algorithm for the Automatic Detection of Sleep Apnea From Single-Lead ECG.

    Science.gov (United States)

    Varon, Carolina; Caicedo, Alexander; Testelmans, Dries; Buyse, Bertien; Van Huffel, Sabine

    2015-09-01

    This paper presents a methodology for the automatic detection of sleep apnea from single-lead ECG. It uses two novel features derived from the ECG, and two well-known features in heart rate variability analysis, namely the standard deviation and the serial correlation coefficients of the RR interval time series. The first novel feature uses the principal components of the QRS complexes, and it describes changes in their morphology caused by an increased sympathetic activity during apnea. The second novel feature extracts the information shared between respiration and heart rate using orthogonal subspace projections. Respiratory information is derived from the ECG by means of three state-of-the-art algorithms, which are implemented and compared here. All features are used as input to a least-squares support vector machine classifier with an RBF kernel. In total, 80 ECG recordings were included in the study. Accuracies of about 85% are achieved on a minute-by-minute basis, for two independent datasets including both hypopneas and apneas together. Separation between apnea and normal recordings is achieved with 100% accuracy. In addition to apnea classification, the proposed methodology determines the contamination level of each ECG minute. The performances achieved are comparable with those reported in the literature for fully automated algorithms. These results indicate that the use of only ECG sensors can achieve good accuracies in the detection of sleep apnea. Moreover, the contamination level of each ECG segment can be used to automatically detect artefacts, and to highlight segments that require further visual inspection.
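
    Of the heart rate variability features listed above, the serial correlation coefficient of the RR interval series is easy to sketch as a plain lag-k autocorrelation; this is an illustrative version, not necessarily the paper's exact estimator.

```python
def serial_correlation(x, lag):
    """Lag-k serial (auto)correlation coefficient of a time series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# RR intervals (s): a strictly alternating series is strongly
# anti-correlated at lag 1
rr = [0.8, 1.0, 0.8, 1.0, 0.8, 1.0, 0.8, 1.0]
r1 = serial_correlation(rr, 1)
```

    Cyclic variation of heart rate during apnea shows up as structure in these lagged coefficients.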

  20. Risk of hepatotoxicity associated with the use of telithromycin: a signal detection using data mining algorithms.

    Science.gov (United States)

    Chen, Yan; Guo, Jeff J; Healy, Daniel P; Lin, Xiaodong; Patel, Nick C

    2008-12-01

    With the exception of case reports, limited data are available regarding the risk of hepatotoxicity associated with the use of telithromycin. To detect the safety signal regarding the reporting of hepatotoxicity associated with the use of telithromycin using 4 commonly employed data mining algorithms (DMAs). Based on the Adverse Events Reporting System (AERS) database of the Food and Drug Administration, 4 DMAs, including the reporting odds ratio (ROR), the proportional reporting ratio (PRR), the information component (IC), and the Gamma Poisson Shrinker (GPS), were applied to examine the association between the reporting of hepatotoxicity and the use of telithromycin. The study period was from the first quarter of 2004 to the second quarter of 2006. The reporting of hepatotoxicity was identified using the preferred terms indexed in the Medical Dictionary for Regulatory Activities. The drug name was used to identify reports regarding the use of telithromycin. A total of 226 reports describing hepatotoxicity associated with the use of telithromycin were recorded in the AERS. A safety problem of telithromycin associated with increased reporting of hepatotoxicity was clearly detected by 4 algorithms as early as 2005, signaling the problem in the first quarter by the ROR and the IC, in the second quarter by the PRR, and in the fourth quarter by the GPS. A safety signal was indicated by the 4 DMAs suggesting an association between the reporting of hepatotoxicity and the use of telithromycin. Given the wide use of telithromycin and serious consequences of hepatotoxicity, clinicians should be cautious when selecting telithromycin for treatment of an infection. In addition, further observational studies are required to evaluate the utility of signal detection systems for early recognition of serious, life-threatening, low-frequency drug-induced adverse events.
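
    Two of the disproportionality measures named above, the ROR and the PRR, are simple ratios computed from a 2x2 table of report counts. The counts below are hypothetical, not the AERS telithromycin data.

```python
def ror(a, b, c, d):
    """Reporting odds ratio: (a/b) / (c/d) = ad / (bc), where
    a = drug of interest & event of interest,
    b = drug of interest & all other events,
    c = all other drugs & event of interest,
    d = all other drugs & all other events."""
    return (a * d) / (b * c)

def prr(a, b, c, d):
    """Proportional reporting ratio: [a/(a+b)] / [c/(c+d)]."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical report counts (not from the study)
a, b, c, d = 10, 90, 100, 9900
ror_value = ror(a, b, c, d)
prr_value = prr(a, b, c, d)
```

    Values well above 1 (with enough reports) are treated as a signal; the IC and GPS methods add Bayesian shrinkage on top of this idea.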

  1. Noise-reducing algorithms do not necessarily provide superior dose optimisation for hepatic lesion detection with multidetector CT.

    Science.gov (United States)

    Dobeli, K L; Lewis, S J; Meikle, S R; Thiele, D L; Brennan, P C

    2013-03-01

    To compare the dose-optimisation potential of a smoothing filtered backprojection (FBP) algorithm and a hybrid FBP/iterative algorithm with that of a standard FBP algorithm at three slice thicknesses for hepatic lesion detection with multidetector CT. A liver phantom containing a 9.5-mm opacity with a density of 10 HU below background was scanned at 125, 100, 75, 50 and 25 mAs. Data were reconstructed with standard FBP (B), smoothing FBP (A) and hybrid FBP/iterative (iDose(4)) algorithms at 5-, 3- and 1-mm collimation. 10 observers marked opacities using a four-point confidence scale. The jackknife alternative free-response receiver operating characteristic figure of merit (FOM), sensitivity and noise were calculated. Compared with the 125-mAs/5-mm setting for each algorithm, significant reductions in FOM were found at reduced-dose settings. Superior performance for hepatic lesion detection was not shown with either a smoothing FBP algorithm or a hybrid FBP/iterative algorithm compared with a standard FBP technique, even though noise reduction with thinner slices was demonstrated with the alternative approaches. Reductions in image noise with non-standard CT algorithms do not necessarily translate to an improvement in low-contrast object detection.

  2. Semi-supervised spectral algorithms for community detection in complex networks based on equivalence of clustering methods

    Science.gov (United States)

    Ma, Xiaoke; Wang, Bingbo; Yu, Liang

    2018-01-01

    Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality, and algorithms to discover communities. Despite significant research on each of them, few attempts have been made to establish a connection between the two issues. To attack this problem, a generalized quantification function is proposed for communities in weighted networks, which provides a framework that unifies several well-known measures. Then, we prove that the trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means and spectral clustering. This serves as the theoretical foundation for designing algorithms for community detection. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting the equivalence relation, combining nonnegative matrix factorization and spectral clustering. Different from traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.

  3. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection, Journal of Applied Remote Sensing Vol.3

    Science.gov (United States)

    W. Wang; J.J. Qu; X. Hao; Y. Liu

    2009-01-01

    In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remotely sensed characteristics of these fires have to be...

  4. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis.

    Science.gov (United States)

    Harrison, Andrew M; Thongprayoon, Charat; Kashyap, Rahul; Chute, Christopher G; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly

    2015-02-01

    To develop and test an automated surveillance algorithm (sepsis "sniffer") for the detection of severe sepsis and monitoring failure to recognize and treat severe sepsis in a timely manner. We conducted an observational diagnostic performance study using independent derivation and validation cohorts from an electronic medical record database of the medical intensive care unit (ICU) of a tertiary referral center. All patients aged 18 years and older who were admitted to the medical ICU from January 1 through March 31, 2013 (N=587), were included. The criterion standard for severe sepsis/septic shock was manual review by 2 trained reviewers with a third superreviewer for cases of interobserver disagreement. Critical appraisal of false-positive and false-negative alerts, along with recursive data partitioning, was performed for algorithm optimization. An algorithm based on criteria for suspicion of infection, systemic inflammatory response syndrome, organ hypoperfusion and dysfunction, and shock had a sensitivity of 80% and a specificity of 96% when applied to the validation cohort. In order, low systolic blood pressure, systemic inflammatory response syndrome positivity, and suspicion of infection were determined through recursive data partitioning to be of greatest predictive value. Lastly, 117 alert-positive patients (68% of the 171 patients with severe sepsis) had a delay in recognition and treatment, defined as no lactate and central venous pressure measurement within 2 hours of the alert. The optimized sniffer accurately identified patients with severe sepsis that bedside clinicians failed to recognize and treat in a timely manner. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
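
    The alert logic described above combines rule blocks of this kind. The SIRS thresholds below are the standard adult criteria; the infection and hypoperfusion cut-offs are illustrative, not the optimized values from the study's recursive partitioning.

```python
def sirs_count(temp_c, hr, rr, wbc):
    """Number of SIRS criteria met (standard adult thresholds)."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,  # temperature, deg C
        hr > 90,                          # heart rate, beats/min
        rr > 20,                          # respiratory rate, breaths/min
        wbc > 12.0 or wbc < 4.0,          # white cells, 10^3 cells/uL
    ])

def severe_sepsis_alert(suspected_infection, temp_c, hr, rr, wbc,
                        sys_bp, lactate):
    """Alert when infection is suspected, >= 2 SIRS criteria are met,
    and there is evidence of hypoperfusion (illustrative thresholds)."""
    sirs = sirs_count(temp_c, hr, rr, wbc) >= 2
    hypoperfusion = sys_bp < 90 or lactate > 2.0
    return suspected_infection and sirs and hypoperfusion

alert = severe_sepsis_alert(True, 38.6, 110, 24, 15.0, 85, 3.1)
```

    In the study, delayed recognition was then flagged when no lactate and central venous pressure measurements followed within 2 hours of such an alert.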

  5. A hybrid lung and vessel segmentation algorithm for computer aided detection of pulmonary embolism

    Science.gov (United States)

    Raghupathi, Laks; Lakare, Sarang

    2009-02-01

    Advances in multi-detector technology have made CT pulmonary angiography (CTPA) a popular radiological tool for pulmonary embolism (PE) detection. CTPA provides rich detail of lung anatomy and is a useful diagnostic aid in highlighting even very small PE. However, analyzing hundreds of slices is laborious and time-consuming for the practicing radiologist, and the presence of various PE look-alikes may also cause misdiagnosis. Computer-aided diagnosis (CAD) can be a potential second reader in providing key diagnostic information. Since PE occurs only in the pulmonary arteries, it is important to mark this region of interest (ROI) during CAD preprocessing. In this paper, we present a new lung and vessel segmentation algorithm for extracting the contrast-enhanced vessel ROI in CTPA. Existing approaches to segmentation either provide only the larger lung area without highlighting the vessels or are computationally prohibitive. We propose a hybrid lung and vessel segmentation which uses an initial lung ROI and determines the vessels through a series of refinement steps. We first identify a coarse vessel ROI by finding the "holes" in the lung ROI. We then use the initial ROI as seed points for a region-growing process while carefully excluding regions which are not relevant. The vessel segmentation mask covers 99% of the 259 PE in a real-world set of 107 CTPA. Further, our algorithm increases the net sensitivity of a prototype CAD system by 5-9% across all PE categories in the training and validation data sets. The average run time of the algorithm was only 100 seconds on a standard workstation.
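
    The seeded region-growing refinement step can be sketched as a breadth-first flood fill with an intensity tolerance. This is a generic 2-D version, not the paper's CTPA-specific rules.

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a 4-connected region from seed, accepting pixels whose
    intensity is within tol of the seed intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    mask = [[False] * w for _ in range(h)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(img[ny][nx] - base) <= tol):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# Bright "vessel" (value 200) crossing a dark background (value 20)
img = [[200 if x == 2 else 20 for x in range(5)] for _ in range(5)]
mask = region_grow(img, (0, 2), 30)
grown = sum(v for row in mask for v in row)
```

    Only the bright column is absorbed; the dark background stays outside the mask, which is the behavior a vessel ROI needs.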

  6. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm.

    Science.gov (United States)

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-10-01

    Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.

  7. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm

    Directory of Open Access Journals (Sweden)

    Hui Zhou

    2016-10-01

    Full Text Available Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.
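
    A minimal version of the peak-heuristics idea: accept local maxima above an amplitude threshold, with a minimum spacing between accepted peaks. Threshold and spacing values here are arbitrary, not the paper's.

```python
def find_peaks(signal, threshold, min_gap):
    """Indices of local maxima above threshold, at least min_gap
    samples apart (a simple peak-heuristics rule)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and (not peaks or i - peaks[-1] >= min_gap)):
            peaks.append(i)
    return peaks

# Synthetic jerk-like trace with two stride peaks
jerk = [0, 1, 5, 1, 0, 0, 1, 6, 2, 0, 0]
peaks = find_peaks(jerk, 3.0, 3)
```

    The spacing constraint acts as a refractory period, preventing one stride from triggering multiple detections.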

  8. Early Seizure Detection by Applying Frequency-Based Algorithm Derived from the Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jiseon Lee

    2017-08-01

    Full Text Available The use of automatic electrical stimulation in response to early seizure detection has been introduced as a new treatment for intractable epilepsy. For the effective application of this method as a successful treatment, improving the accuracy of early seizure detection is crucial. In this paper, we propose the application of a frequency-based algorithm derived from principal component analysis (PCA), and demonstrate improved efficacy for early seizure detection in a pilocarpine-induced epilepsy rat model. A total of 100 ictal electroencephalographs (EEGs) during spontaneous recurrent seizures from 11 epileptic rats were included in the analysis. PCA was applied to the covariance matrix of conventional EEG frequency band signals. Two PCA results were compared: one from the initial segment of seizures (the first 5 s after seizure onset) and the other from the whole segment of seizures. To compare accuracy, we obtained the specific threshold satisfying the target performance on the training set, and compared the False Positives (FP), False Negatives (FN), and Latency (Lat) of the PCA-based feature derived from the initial segment of seizures with the other six features on the testing set. The PCA-based feature derived from the initial segment of seizures performed significantly better than the other features, with 1.40% FP, zero FN, and 0.14 s Lat. These results demonstrate that the proposed frequency-based feature from PCA, which captures the characteristics of the initial phase of a seizure, is effective for early seizure detection. Experiments with rat ictal EEGs showed an improved early seizure detection rate with PCA applied to the covariance of the initial 5 s segment after visual seizure onset instead of the whole seizure segment or other conventional frequency bands.
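
    PCA on a covariance matrix can be sketched with power iteration for the leading component. This uses 2-D toy data; the study applied PCA to the covariance of EEG frequency-band signals.

```python
import math

def first_pc(data, iters=100):
    """First principal component of 2-D samples via power iteration
    on the (population) covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    cxx = sum((p[0] - mx) ** 2 for p in data) / n
    cyy = sum((p[1] - my) ** 2 for p in data) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Multiply by the covariance matrix, then renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

# Points spread along the line y = x: the first PC points near (1,1)/sqrt(2)
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (2.1, 1.9), (0.9, 1.1)]
pc = first_pc(pts)
```

    The leading component captures the dominant direction of variance, which is what the seizure feature tracks in the band-power space.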

  9. Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data

    Directory of Open Access Journals (Sweden)

    Luc Bertels

    2016-12-01

    Full Text Available Water body detection worldwide using spaceborne remote sensing is a challenging task. A global-scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses with a spatial resolution of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Besides the water bodies layer, a quality layer, based on the water body occurrences, is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained, while a mean Omission Error (OE) of 15.4% was obtained for a minimum Water Surface Ratio (WSR) of 0.5, dropping to 9.8% for minimum WSR = 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE = 1.5% and OE = 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.
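
    The HSV transform plus parabolic value-threshold idea can be sketched with the standard library's colorsys. The parabola coefficients below are invented for illustration and are not the operational WBDA thresholds (which also use band composites rather than true RGB).

```python
import colorsys

def water_candidate(r, g, b, a=0.4, vmax=0.35):
    """Flag a pixel as a water candidate from its HSV hue and value,
    using an illustrative parabolic threshold on value as a function
    of hue (coefficients are made up for this sketch)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # all components in [0, 1]
    v_thresh = vmax - a * (h - 0.6) ** 2    # parabola peaking near blue hues
    return v < v_thresh

dark_blue = water_candidate(0.05, 0.10, 0.20)   # dark, bluish pixel
bright_land = water_candidate(0.60, 0.55, 0.30) # bright, yellowish pixel
```

    Dark pixels with water-like hues pass the test, while bright land pixels exceed the value threshold and are rejected.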

  10. Introducing a Model for Suspicious Behaviors Detection in Electronic Banking by Using Decision Tree Algorithms

    Directory of Open Access Journals (Sweden)

    Rohulla Kosari Langari

    2014-02-01

    Full Text Available The transformation of the world through information technology and the development of the Internet have created competitive knowledge in the field of electronic commerce and increased the competitive potential among organizations. Under these conditions, the growing rate of commercial transactions, guaranteed with speed and quality, depends on dynamic electronic banking systems that use modern technology to facilitate electronic business processes. Internet banking, a fundamental pillar and determinant of e-banking, faces various obstacles and threats in cyberspace. One of these challenges is the uncertainty in guaranteeing the security of financial transactions, as well as the existence of suspicious and unusual behavior such as mail fraud for financial abuse. Various systems based on machine intelligence and data mining techniques have been designed for detecting fraud in user behavior and have been applied in industries such as insurance, medicine and banking. The aim of this article is to recognize unusual user behavior in e-banking systems: detecting user behavior and categorizing the emerging patterns in order to prepare the ground for predicting unauthorized penetration and detecting suspicious behavior. Since user behavior in Internet systems is uncertain, records of transactions can be useful for understanding these movements; among machine learning methods, the decision tree technique is a common tool for classification and prediction. Therefore, this research first determines the effective banking variables and the weight of each in Internet behavior, and then combines various behavior patterns to derive inductive rules that provide the ability to recognize different behaviors. Finally, four decision tree algorithms, CHAID, exhaustive CHAID, C4.5 and C5.0, were compared and evaluated for the classification and detection of suspicious behaviors.

  11. A Fast Algorithm of Generalized Radon-Fourier Transform for Weak Maneuvering Target Detection

    Directory of Open Access Journals (Sweden)

    Weijie Xia

    2016-01-01

    Full Text Available The generalized Radon-Fourier transform (GRFT) has been proposed to detect weak maneuvering radar targets by realizing coherent integration via jointly searching the motion parameter space. Two main drawbacks of the GRFT are its heavy computational burden and the blind speed side lobes (BSSL), which cause serious false alarms. The BSSL learning-based particle swarm optimization (BPSO) has been proposed before to reduce the computational burden of the GRFT and solve the BSSL problem simultaneously. However, BPSO suffers from an apparent loss in detection performance compared with the GRFT. In this paper, a fast implementation algorithm of the GRFT using BSSL learning-based modified wind-driven optimization (BMWDO) is proposed. In the BMWDO, the BSSL learning procedure is also used to deal with the BSSL phenomenon. Besides, the MWDO adjusts the coefficients in WDO with the Lévy distribution and the uniform distribution, and it outperforms PSO in a noisy environment. Compared with BPSO, the proposed method can achieve better detection performance at a similar computational cost. Several numerical experiments are also provided to demonstrate the effectiveness of the proposed method.

  12. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J.R.; Davis, T.A.; Lee, G.K. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny Edge Detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results of a version of the Canny Edge Detection algorithm show promise as an accurate edge extractor which may be used in an edge-pixel-based binocular stereo vision system.

  13. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset from a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.

  14. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumption about the stationarity or size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. To choose the optimal threshold for each norm, ten values of the threshold ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature identified in this study is Trans, which was selected in most feature spaces with high accuracy.
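    The recurrence plot and its simplest RQA measure, the recurrence rate, can be sketched as follows for a scalar series; the absolute difference stands in for the paper's five distance norms, and the phase-space embedding step is omitted.

```python
def recurrence_matrix(series, eps):
    """Binary recurrence plot: R[i][j] = 1 when states i and j lie
    within the threshold eps (absolute difference as the distance)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent points: the simplest RQA measure."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)

periodic = [0, 1, 0, 1, 0, 1, 0, 1]                    # strongly recurrent
aperiodic = [0.0, 0.9, 0.2, 1.3, -0.4, 0.7, 0.1, 1.1]  # weakly recurrent
rr_periodic = recurrence_rate(recurrence_matrix(periodic, eps=0.1))
rr_aperiodic = recurrence_rate(recurrence_matrix(aperiodic, eps=0.1))
```

    Richer RQA measures (determinism, laminarity, trapping time) are computed from diagonal and vertical line structures in the same matrix.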

  15. Detection of Keratoconus in Clinically and Algorithmically Topographically Normal Fellow Eyes Using Epithelial Thickness Analysis

    Science.gov (United States)

    Reinstein, Dan Z.; Archer, Timothy J.; Urs, Raksha; Gobbe, Marine; RoyChoudhury, Arindam; Silverman, Ronald H.

    2017-01-01

    PURPOSE To assess the effectiveness of a keratoconus-detection algorithm derived from Artemis very high-frequency (VHF) digital ultrasound (ArcScan Inc., Morrison, CO) epithelial thickness maps in the fellow eye from a series of patients with unilateral keratoconus. METHODS The study included 10 patients with moderate to advanced keratoconus in one eye but a clinically and algorithmically topographically normal fellow eye. VHF digital ultrasound epithelial thickness data were acquired and a previously developed classification model was applied for identification of keratoconus to the clinically normal fellow eyes. Pentacam (Oculus Optikgeräte, Wetzlar, Germany) Belin-Ambrósio Display (BAD) data (5 of 10 eyes), and Orbscan (Bausch & Lomb, Rochester, NY) SCORE data (9 of 10 eyes) were also evaluated. RESULTS Five of the 10 fellow eyes were classified as keratoconic by the VHF digital ultrasound epithelium model. Five of 9 fellow eyes were classified as keratoconic by the SCORE model. For the 5 fellow eyes with Pentacam and VHF digital ultrasound data, one was classified as keratoconic by the VHF digital ultrasound model, one (different) eye by a combined VHF digital ultrasound and Pentacam model, and none by BAD-D alone. CONCLUSIONS Under the assumption that keratoconus is a bilateral but asymmetric disease, half of the ‘normal’ fellow eyes could be found to have keratoconus using epithelial thickness maps. The Orbscan SCORE or the combination of topographic BAD criteria with epithelial maps did not perform better. PMID:26544561

  16. Novel Rock Detection Intelligence for Space Exploration Based on Non-Symbolic Algorithms and Concepts

    Science.gov (United States)

    Yildirim, Sule; Beachell, Ronald L.; Veflingstad, Henning

    2007-01-01

    Future space exploration can utilize artificial intelligence as an integral part of next-generation space rover technology to make the rovers more autonomous in performing mission objectives. The main advantage of the increased autonomy through a higher degree of intelligence is that it allows for greater utilization of rover resources by reducing the frequency of time-consuming communications between rover and earth. In this paper, we propose a space exploration application of our research on a non-symbolic algorithm and concepts model. This model is based on one of the most recent approaches of cognitive science and artificial intelligence research, the parallel distributed processing approach. We use the Mars rovers, Spirit and Opportunity, as a starting point for proposing what rovers in the future could do if the presented model of non-symbolic algorithms and concepts is embedded in a future space rover. The chosen space exploration application for this paper, novel rock detection, is only one of many potential space exploration applications which can be optimized (through reduction of the frequency of rover-earth communications, and collection and transmission of only data that is distinctive/novel) through the use of artificial intelligence technology compared to existing approaches.

  17. Designing of Computer Vision Algorithm to Detect Sweet Pepper for Robotic Harvesting Under Natural Light

    Directory of Open Access Journals (Sweden)

    A Moghimi

    2015-03-01

    Full Text Available In recent years, automation in the agricultural field has attracted more attention from researchers and greenhouse producers. The main reasons are to reduce costs, including labor costs, and to reduce the hard working conditions in greenhouses. In the present research, a vision system of a harvesting robot was developed for recognition of green sweet pepper on the plant under natural light. The major challenge of this study was the noticeable color similarity between sweet pepper and plant leaves. To overcome this challenge, a new texture index based on edge density approximation (EDA) was defined and used in combination with color indices such as Hue, Saturation and the excessive green index (EGI). Fifty images were captured from fifty sweet pepper plants to evaluate the algorithm. The algorithm could recognize 92 out of 107 (i.e., a detection accuracy of 86%) sweet peppers located within the workspace of the robot. The error of the system in recognizing background, mostly leaves, as a green sweet pepper decreased by 92.98% when using the newly defined texture index in comparison with color analysis alone. This shows the importance of integrating texture with color features when recognizing sweet peppers. The main causes of error, besides color similarity, were the waxy and rough surface of the sweet pepper, which cause higher reflectance and non-uniform lighting on the surface, respectively.
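    The color-plus-texture idea can be sketched with the standard excessive green index (EGI = 2G - R - B) and a crude edge-density proxy; the latter is a simplified stand-in for the paper's EDA index, whose exact definition is not given in the abstract.

```python
def excessive_green(r, g, b):
    """Excessive green index, EGI = 2G - R - B: large for vegetation pixels."""
    return 2 * g - r - b

def edge_density(gray_patch, thresh=20):
    """Crude texture proxy: fraction of horizontal neighbour pairs whose
    intensity difference exceeds thresh. Leaf patches tend to score higher
    than the smooth, waxy surface of a pepper."""
    h, w = len(gray_patch), len(gray_patch[0])
    edges = sum(1 for y in range(h) for x in range(w - 1)
                if abs(gray_patch[y][x] - gray_patch[y][x + 1]) > thresh)
    return edges / (h * (w - 1))

smooth = [[100, 100, 100, 100] for _ in range(4)]    # pepper-like patch
textured = [[100, 140, 100, 140] for _ in range(4)]  # leaf-like patch
```

    Combining both cues lets a classifier reject green background regions that a color index alone would accept.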

  18. Detecting outlying studies in meta-regression models using a forward search algorithm.

    Science.gov (United States)

    Mavridis, Dimitris; Moustaki, Irini; Wall, Melanie; Salanti, Georgia

    2017-06-01

    When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies into the set one-by-one that are determined to be closest to the fitted model of the existing set. As each study is added to the set, plots of estimated parameters and measures of fit are monitored to identify outliers by sharp changes in the forward plots. We apply the proposed outlier detection method to two real data sets; a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
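    A toy version of the forward-search ordering might look as follows; it uses unweighted means with no effect modifiers or monitoring plots, so it is only a sketch of the idea that outlying studies enter the subset last.

```python
def forward_search(effects, start_size=3):
    """Order studies by a toy forward search: seed with the start_size
    effects closest to the median, then repeatedly add the remaining study
    closest to the current subset mean. Outliers enter last."""
    med = sorted(effects)[len(effects) // 2]
    by_dist = sorted(range(len(effects)), key=lambda i: abs(effects[i] - med))
    subset, remaining = by_dist[:start_size], by_dist[start_size:]
    while remaining:
        mean = sum(effects[i] for i in subset) / len(subset)
        nxt = min(remaining, key=lambda i: abs(effects[i] - mean))
        subset.append(nxt)
        remaining.remove(nxt)
    return subset

effects = [0.30, 0.32, 0.28, 0.31, 0.29, 1.50]  # study 5 is an outlier
entry_order = forward_search(effects)
```

    In the published method, parameter estimates and fit measures are re-examined at each step, and a sharp change in the forward plots flags the newly entered study as outlying.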

  19. A research about breast cancer detection using different neural networks and K-MICA algorithm.

    Science.gov (United States)

    Kalteh, A A; Zarbakhsh, Payam; Jirabadi, Meysam; Addeh, Jalil

    2013-01-01

    Breast cancer is the second leading cause of death for women all over the world. The correct diagnosis of breast cancer is one of the major problems in the medical field. From the literature it has been found that different pattern recognition techniques can help to improve diagnosis in this domain. This paper presents a novel hybrid intelligent method for the detection of breast cancer. The proposed method includes two main modules: a clustering module and a classifier module. In the clustering module, the input data are first clustered by a new technique, a suitable combination of the modified imperialist competitive algorithm (MICA) and the K-means algorithm. Then the Euclidean distance of each pattern from the determined clusters is computed. The classifier module determines the membership of the patterns using the computed distances. In this module, several neural networks, such as the multilayer perceptron, probabilistic neural networks and radial basis function neural networks, are investigated. Based on this experimental study, we choose the best classifier for recognizing breast cancer. The proposed system is tested on the Wisconsin Breast Cancer (WBC) database, and the simulation results show that the recommended system has high accuracy.
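    The clustering module's output, the Euclidean distance of each pattern to every cluster center, can be sketched with plain K-means standing in for the K-MICA hybrid described in the abstract; the data here are hypothetical 2-D points rather than WBC attributes.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two equal-length points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means; returns the k centroids."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[j] for j, cl in enumerate(clusters)]
    return centroids

def distance_features(point, centroids):
    """Distance of one pattern to every cluster centre: the classifier input."""
    return [dist(point, c) for c in centroids]

# Two well-separated toy clusters in a 2-D attribute space.
data = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
feats = distance_features((0, 0), kmeans(data, 2))
```

    In the paper, a distance vector of this kind, rather than the raw attributes, is what the neural-network classifiers receive.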

  20. Decision tree algorithm for detection of spatial processes in landscape transformation.

    Science.gov (United States)

    Bogaert, Jan; Ceulemans, Reinhart; Salvador-Van Eysenrode, David

    2004-01-01

    The conversion of landscapes by human activities results in widespread changes in landscape spatial structure. Regardless of the type of land conversion, there appears to be a limited number of common spatial configurations that result from such land transformation processes. Some of these configurations are considered optimal or more desirable than others. Based on pattern geometry, we define ten processes responsible for pattern change: aggregation, attrition, creation, deformation, dissection, enlargement, fragmentation, perforation, shift, and shrinkage. A novelty in this contribution is the inclusion of transformation processes causing expansion of the land cover of interest. Consequently, we propose a decision tree algorithm that enables detection of these processes, based on three parameters that have to be determined before and after the transformation of the landscape: area, perimeter length, and number of patches of the focal landscape class. As an example, the decision tree algorithm is applied to determine the transformation processes of three divergent land cover change scenarios: deciduous woodland degradation in Cadiz Township (Wisconsin, USA) 1831-1950, canopy gap formation in a terra firme rain forest at the Tiputini Biodiversity Station (Amazonian Ecuador) 1997-1998, and forest regrowth in Petersham Township (Massachusetts, USA) 1830-1985. The examples signal the importance of the temporal resolution of the data, since long-term pattern conversions can be subdivided into stages in which particular pattern components are altered by specific transformation processes.
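    A simplified sketch of such a decision tree, covering only a subset of the ten processes and using hypothetical branch conditions (the published tree's exact thresholds are not given in the abstract):

```python
def transformation_process(a0, p0, n0, a1, p1, n1):
    """Classify a change in (area, perimeter, patch count), measured before
    (a0, p0, n0) and after (a1, p1, n1) the transformation."""
    if a1 < a0:                        # focal land cover lost area
        if n1 > n0:
            return "fragmentation"     # broken into more, smaller patches
        if n1 < n0:
            return "attrition"         # whole patches disappeared
        return "perforation" if p1 > p0 else "shrinkage"
    if a1 > a0:                        # focal land cover expanded
        if n1 > n0:
            return "creation"          # new patches appeared
        return "enlargement"           # existing patches grew
    return "shift" if p1 != p0 else "no change"

# Example: woodland loses area while patch count jumps -> fragmentation.
label = transformation_process(100, 40, 5, 60, 55, 12)
```

    The full tree distinguishes all ten processes, including dissection versus fragmentation, by comparing how strongly the three parameters change relative to each other.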