The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on times series and pattern recognition techniques. Attention was given to the effects o...
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Stroganov, Victor Yu.
This paper proposes an algorithm for the management of traffic incidents, aimed at minimizing the impact of incidents on road traffic in general. The proposed algorithm is based on the theory of fuzzy sets and provides identification of accidents, as well as the adoption of appropriate measures to address them as quickly as possible. A…
Nizamani, Sarwat; Memon, Nasrullah
The paper presents experiments to detect terrorism incident type from news summary data. We applied classification techniques to news summary data to analyze incidents and detect the incident type. A number of experiments were conducted using various classification algorithms, and the results show that a simple decision tree classifier can learn incident type from news data with satisfactory results.
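As a toy illustration of how even a very shallow tree can pick up incident type from word presence, here is a one-split decision stump in pure Python. The corpus, labels, and all names below are invented for illustration; the paper used full decision tree learners on real news summary data.

```python
def train_stump(docs, labels):
    """One-split decision stump on word presence: pick the word whose
    presence/absence split misclassifies the fewest training documents,
    then predict the majority label on each side."""
    tokens = [set(d.lower().split()) for d in docs]
    vocab = set().union(*tokens)

    def split_error(word):
        err = 0
        for present in (True, False):
            side = [l for t, l in zip(tokens, labels) if (word in t) == present]
            if side:
                majority = max(set(side), key=side.count)
                err += sum(1 for l in side if l != majority)
        return err

    word = min(vocab, key=split_error)
    rule = {}
    for present in (True, False):
        side = [l for t, l in zip(tokens, labels) if (word in t) == present]
        rule[present] = max(set(side), key=side.count) if side else labels[0]
    return lambda doc: rule[word in set(doc.lower().split())]
```

A full decision tree simply applies this split selection recursively to each side; the stump is the depth-one special case.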
Bogoev, Dimitar; Karam, Arzé
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
The US Department of Transportation (US-DOT) estimates that over half of all congestion events are caused by highway incidents rather than by rush-hour traffic in big cities. Real-time incident detection on freeways is an important part of any mo...
Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela
The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
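The two-stage idea builds on the standard single-stage MUSIC pseudospectrum. As a hedged illustration only — using a synthetic far-field direction-of-arrival setup on a uniform linear array rather than the paper's near-field scattering configuration, with every parameter below invented — the core noise-subspace projection can be sketched as:

```python
import numpy as np

def music_doa(n_sources=2, true_deg=(-20.0, 30.0), M=8, N=400, seed=0):
    """Estimate source directions with MUSIC on a half-wavelength ULA
    of M sensors from N synthetic snapshots."""
    rng = np.random.default_rng(seed)
    steer = lambda th: np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(th)))
    A = steer(np.deg2rad(np.asarray(true_deg)))
    S = rng.standard_normal((n_sources, N)) + 1j * rng.standard_normal((n_sources, N))
    noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    X = A @ S + noise
    R = X @ X.conj().T / N                    # sample covariance
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :M - n_sources]                 # noise subspace
    grid = np.deg2rad(np.arange(-90, 90.5, 0.5))
    # pseudospectrum: large where steering vectors are near-orthogonal to En
    P = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)
    peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
    peaks.sort(key=lambda i: P[i], reverse=True)
    return sorted(np.rad2deg(grid[p]) for p in peaks[:n_sources])
```

The two-stage variant described in the abstract would run such a scan first for the strong components, then incorporate their number and locations into the model before scanning again for the weak ones.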
A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
Deadlock detection is a challenging issue in the analysis and design of on-chip networks. We have designed an algorithm to detect deadlocks automatically in on-chip networks with wormhole switching. The algorithm has been specified and proven correct in ACL2. To enable a top-down proof methodology, some parts of the algorithm have been left unimplemented. For these parts, the ACL2 specification contains constrained functions introduced with defun-sk. We used single-threaded objects to represent the data structures used by the algorithm. In this paper, we present details on the proof of correctness of the algorithm. The process of formal verification was crucial to get the algorithm flawless. Our ultimate objective is to have an efficient executable, and formally proven correct implementation of the algorithm running in ACL2.
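The paper's algorithm is specified and proved in ACL2 and accounts for wormhole-switching semantics (channel dependencies, worms spanning several routers). As a loose illustration of only the core idea — a deadlock manifests as a cyclic wait dependency — here is a plain depth-first-search cycle check on a hypothetical wait-for graph; all names are invented and none of this reflects the ACL2 development:

```python
def find_cycle(wait_for):
    """DFS cycle detection on a wait-for graph (node -> set of nodes it
    waits on). A cycle means a potential deadlock among those agents."""
    WHITE, GRAY, BLACK = 0, 1, 2
    nodes = set(wait_for) | {m for deps in wait_for.values() for m in deps}
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY                     # on the current DFS path
        for m in wait_for.get(n, ()):
            if color[m] == GRAY:            # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK                    # fully explored, no cycle via n
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)
```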
Vinod Chandra, S. S.
In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints to problems that incorporate path finding, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for NP-hard problems related to path discovery, as well as for many practical optimization problems, and it can be extended to a range of shortest-path problems.
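The agent mechanics of the nature-inspired algorithm are not given in this abstract, so they are not reproduced here. As a conventional baseline for the shortest-path objective the agents evolve toward, a standard Dijkstra sketch (graph representation and names invented) is:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm; graph maps node -> {neighbor: edge_weight}.
    Assumes dst is reachable from src and weights are non-negative."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]                 # walk predecessors back to the source
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]
```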
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
In this paper, we report on experiments to detect illegitimate emails using a boosting algorithm. We call an email illegitimate if it is not useful for the receiver or for society. We have divided the problem into two major areas of illegitimate email detection: suspicious email detection and s...
Gonzaga, Adilson; Franca, Celso Aparecido de
Edge detection techniques applied to radiographic digital images are discussed. Some algorithms have been implemented and the results are displayed to enhance boundaries or hide details. An algorithm applied to a preprocessed image with enhanced contrast is proposed and the results are discussed.
Verbeek, Freek; Schmaltz, Julien
Deadlock detection is a challenging issue in the analysis and design of on-chip networks. We have designed an algorithm to detect deadlocks automatically in on-chip networks with wormhole switching. The algorithm has been specified and proven correct in ACL2. To enable a top-down proof methodology, some parts of the algorithm have been left unimplemented. For these parts, the ACL2 specification contains constrained functions introduced with defun-sk. We used single-threaded objects to represe...
Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based… the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure detection system. However, different patients might require different types of algorithms/approaches.
Siegrist, David; Pavlin, J
Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks. They were often able to detect an outbreak on the same day that human investigators had identified as its true start. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal
Accurate detection of corners plays an important part in camera calibration. To deal with the instability and inaccuracy of existing corner detection algorithms, a nearest-neighbour corner matching detection algorithm is proposed. First, the binary image of the photographed pictures is dilated, and quadrilateral outlines in the image are searched for and retained. Second, blocks that match chessboard corners are grouped into a class; if a class contains too many blocks it is deleted, otherwise it is kept, and the midpoint of the two vertex coordinates is taken as the rough corner position. Finally, the corner positions are located precisely. Experimental results show that the algorithm has clear advantages in accuracy and validity for corner detection, and it can safeguard camera calibration in traffic accident measurement.
Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.
Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining
This study focuses on the key theory of lightning detection and exposure, and on the associated experiments. First, an algorithm based on differencing two adjacent frames is selected to remove the static background and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table is then built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform based on a USB 3.0 industrial camera with a CMOS imaging sensor is established, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can quickly and effectively capture clear lightning pictures, including in special nighttime scenes, which will provide beneficial support to the smartphone industry, since the current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
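The first step — differencing adjacent frames to cancel the static background, then thresholding the residual — can be sketched minimally as follows. The threshold value and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def detect_lightning(prev_frame, frame, thresh=40):
    """Subtract adjacent frames to cancel the static background, then
    threshold the residual to flag a sudden brightness transient."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > thresh            # per-pixel transient map
    return bool(mask.any()), mask
```

In the paper's pipeline, a positive detection would then trigger the exposure lookup and frame capture.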
Fall incidents are considered the leading cause of disability and even mortality among older adults. To address this problem, the fall detection and prevention fields have received a lot of attention over the past years and attracted many research efforts. We present in the current study an overall performance comparison between fall detection systems using the most popular machine learning approaches, which are: naïve Bayes, K-nearest neighbour, neural network, and support vector machine. The analysis of the classification power associated with these most widely utilized algorithms is conducted on two fall detection databases, namely FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation is conducted using different state-of-the-art statistical measures such as the overall accuracy, the F-measure coefficient, and the area under the ROC curve (AUC).
Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (… Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions: The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and
Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien
We expand item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model from a relatively small number of training sets. The greedy algorithm employed in the present study nevertheless exhibits good performance with only a small amount of training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach for inferring sparse interactions in Boltzmann machine learning to our greedy algorithm and find the latter to be superior in several aspects.
Ristić Jovan D.
Analogue (and addressable) fire detection systems enable a new quality in improving sensitivity to real fires and reducing susceptibility to nuisance alarm sources. Different types of decision algorithms were developed with the intention of improving sensitivity and reducing false alarm occurrence. Initially, this took the form of free alarm-level adjustment based on a preset level. Most multi-criteria decision work has been based on multi-sensor (multi-signature) decision algorithms: using different types of sensors at the same location or, rather, using different aspects (level and rise) of one sensor's measured value. Our idea is to improve sensitivity and reduce false alarm occurrence by forming groups of sensors that work in similar conditions (same side of the building, same or similar technology, or the same working time). Original multi-criteria decision algorithms based on level, rise, and the difference of level and rise from the group average are discussed in this paper.
This work implements two anomaly detection algorithms for detecting Transmission Control Protocol synchronize (TCP SYN) flooding attacks. The two algorithms are an adaptive threshold algorithm and a cumulative sum (CUSUM) based algorithm...
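Both detectors operate on a time series of per-interval SYN-packet counts. A minimal sketch of each, with all parameter values invented for illustration and the adaptive threshold simplified to a comparison against an exponentially weighted moving average, might look like:

```python
def cusum_detect(counts, mu0, drift, h):
    """One-sided CUSUM: alarm when the accumulated positive deviation from
    the expected level mu0 (minus an allowance `drift`) exceeds h."""
    s = 0.0
    for t, x in enumerate(counts):
        s = max(0.0, s + (x - mu0 - drift))
        if s > h:
            return t          # alarm time
    return None

def adaptive_threshold_detect(counts, alpha=0.5, k=3.0):
    """Alarm when the current count exceeds k times an EWMA of past counts."""
    mean = float(counts[0])
    for t, x in enumerate(counts[1:], start=1):
        if x > k * mean:
            return t
        mean = alpha * mean + (1 - alpha) * x
    return None
```

CUSUM accumulates evidence over time and so catches slow ramps that a single-sample threshold misses, at the cost of tuning the drift allowance and alarm level.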
Hordijk, Wim; Smith, Joshua I; Steel, Mike
Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, together with a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, and (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. Our findings: (i) the modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which in some cases also leads to a significant improvement in overall running time; (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average; (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. In summary, (i) improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, and (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
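The basic algorithm referred to here iteratively removes reactions whose reactants are not reachable from the food set, or that lack a catalyst in the reachable closure, until a fixed point is reached; what remains is the maximal RAF (reflexively autocatalytic, food-generated) set. A minimal sketch on a toy reaction system follows; the data representation and names are mine, and the real implementation includes the efficiency modifications the abstract describes:

```python
def closure(food, reactions, active):
    """Molecules producible from the food set using only reactions in `active`.
    reactions[r] = (reactant_set, product_set, catalyst_set)."""
    cl = set(food)
    changed = True
    while changed:
        changed = False
        for r in active:
            reactants, products, _ = reactions[r]
            if reactants <= cl and not products <= cl:
                cl |= products
                changed = True
    return cl

def max_raf(food, reactions):
    """Drop reactions with unreachable reactants or no reachable catalyst;
    iterate to a fixed point (the maximal RAF, possibly empty)."""
    active = set(reactions)
    while True:
        cl = closure(food, reactions, active)
        keep = {r for r in active
                if reactions[r][0] <= cl and reactions[r][2] & cl}
        if keep == active:
            return active
        active = keep
```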
This study develops a tree augmented naive Bayesian (TAN) classifier based incident detection algorithm. Compared with the Bayesian network based detection algorithms developed in previous studies, this algorithm has less dependency on experts' knowledge. The structure of the TAN classifier for incident detection is learned from data. The discretization of continuous attributes is processed automatically using an entropy-based method. A simulation dataset on a section of the Ayer Rajah Expressway (AYE) in Singapore is used to demonstrate the development of the proposed algorithm, including wavelet denoising, normalization, entropy-based discretization, and structure learning. The performance of the TAN based algorithm is evaluated against the previously developed Bayesian network (BN) based and multilayer feed-forward (MLF) neural network based algorithms with the same AYE data. The experiment results show that the TAN based algorithm performs better than the BN classifiers and has a similar performance to the MLF based algorithm. However, the TAN based algorithm would have a wider range of applications because the theory of TAN classifiers is much less complicated than that of MLF. The experiments also show that the TAN classifier based algorithm is significantly faster in model training and calibration than MLF.
M. A. Gunen
In this paper, a new method is presented for the extraction of edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving the mentioned problems.
It is of great significance to research early warning systems for large-scale network security incidents: they can improve a network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
The concept of statistical anomalies, or outliers, has fascinated experimentalists since the earliest attempts to interpret data. We want to know why some data points don’t seem to belong with the others: perhaps we want to eliminate spurious or unrepresentative data from our model. Or, the anomalies themselves may be what we are interested in: an outlier could represent the symptom of a disease, an attack on a computer network, a scientific discovery, or even an unfaithful partner. We start with some general considerations, such as the relationship between clustering and anomaly detection, the choice between supervised and unsupervised methods, and the difference between global and local anomalies. Then we will survey the most representative anomaly detection algorithms, highlighting what kind of data each approach is best suited to, and discussing their limitations. We will finish with a discussion of the difficulties of anomaly detection in high-dimensional data and some new directions for anomaly detec...
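The global/local distinction can be made concrete with the simplest global detector, a z-score rule. This sketch is mine, not from the text: it flags points far from the overall mean, but it will miss local anomalies that sit near the global average while deviating from their own neighbourhood, which is exactly what local methods such as LOF address.

```python
import numpy as np

def global_anomalies(x, k=3.0):
    """Flag points more than k standard deviations from the mean --
    the simplest global (as opposed to local) anomaly detector."""
    mu, sigma = x.mean(), x.std()
    return np.where(np.abs(x - mu) > k * sigma)[0]
```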
Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah
In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results obtained on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
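The sequential kernel each processor runs is ordinary Sobel filtering. A serial NumPy sketch (only illustrative — the paper's implementation is MPI-based, and the function name here is invented):

```python
import numpy as np

def sobel_magnitude(img):
    """Serial Sobel gradient magnitude; output is 2 pixels smaller per axis.
    gx/gy are the standard 3x3 Sobel responses, written with array slices."""
    img = img.astype(float)
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)
```

In the master/slave decomposition, the master would split the image into horizontal strips with a one-row halo on each side, and each slave would apply this same routine to its strip before the master reassembles the result.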
Networks of compromised and remotely controlled computers (bots) are widely used in many Internet fraudulent activities, especially in distributed denial of service attacks. Brute force gives enormous power to bot masters and makes botnet traffic visible; therefore, some countermeasures might be applied at early stages. Our study focuses on detecting botnet propagation via public websites. The provided algorithm might help prevent massive infections when popular web sites are compromised without spreading visual changes used for malware in botnets.
Leone, G.; Solimene, R.
In this contribution we consider the problem of detecting and localizing scatterers of small cross section, with respect to the wavelength, from their scattered field, once a known incident field has interrogated the scene where they reside. A pertinent applicative context is rebar detection within a concrete pillar. For such a case, the scatterers to be detected are represented by the rebars themselves or by voids due to their absence. In both cases, as the scatterers have point-like support, a subspace projection method can be conveniently exploited. However, as the field scattered by rebars is stronger than the one due to voids, the latter can be expected to be difficult to detect. In order to circumvent this problem, in this contribution we adopt a two-step MUltiple SIgnal Classification (MUSIC) detection algorithm. In particular, the first stage aims at detecting rebars. Once the rebars are detected, their positions are exploited to update the Green's function, and then a further detection scheme is run to locate the voids; in this second case, the background medium also encompasses the rebars. The analysis is conducted numerically for a simplified two-dimensional scalar scattering geometry. More in detail, as is usual in MUSIC algorithms, a multi-view/multi-static single-frequency configuration is considered. References: Baratonia, G. Leone, R. Pierri, R. Solimene, "Fault Detection in Grid Scattering by a Time-Reversal MUSIC Approach," Proc. of ICEAA 2011, Turin, 2011; E. A. Marengo, F. K. Gruber, "Subspace-Based Localization and Inverse Scattering of Multiply Scattering Point Targets," EURASIP Journal on Advances in Signal Processing, 2007, Article ID 17342, 16 pages (2007).
Andoohgin Shahri, Mona; Jazi, Mohammad Davarpanah; Borchardt, Glenn; Dadkhah, Mehdi
Invalid journals are recent challenges in the academic world and many researchers are unacquainted with the phenomenon. The number of victims appears to be accelerating. Researchers might be suspicious of predatory journals because they have unfamiliar names, but hijacked journals are imitations of well-known, reputable journals whose websites have been hijacked. Hijacked journals issue calls for papers via generally laudatory emails that delude researchers into paying exorbitant page charges for publication in a nonexistent journal. This paper presents a method for detecting hijacked journals by using a classification algorithm. The number of published articles exposing hijacked journals is limited and most of them use simple techniques that are limited to specific journals. Hence we needed to amass Internet addresses and pertinent data for analyzing this type of attack. We inspected the websites of 104 scientific journals by using a classification algorithm that used criteria common to reputable journals. We then prepared a decision tree that we used to test five journals we knew were authentic and five we knew were hijacked.
Bush Idoko John
The detection of skin colour has been a useful and renowned technique due to its wide range of applications in both diagnostic analyses and human-computer interaction. Various problems could be solved simply by providing an appropriate method for detecting skin-like pixels. Presented in this study is a colour segmentation algorithm that works directly in RGB colour space without converting the colour space. The genfis function, as used in this study, formed the Sugeno fuzzy network and, utilizing the Fuzzy C-Means (FCM) clustering rule, clustered the data; for each cluster/class a rule is generated. Finally, the corresponding output is obtained from pseudo-polynomial mapping of the input dataset to the adaptive neuro-fuzzy inference system (ANFIS).
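The FCM step can be sketched as a generic fuzzy C-means on synthetic RGB-like points. This is not the paper's genfis/ANFIS pipeline — the cluster locations, fuzzifier value, and names below are all illustrative assumptions:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: alternate fuzzy-centroid and membership updates.
    X is (n_points, n_features); returns (c, n_features) centers and
    a (c, n_points) membership matrix whose columns sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distances of every point to every center (+eps to avoid 0-division)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))        # standard FCM membership update
        U = inv / inv.sum(axis=0, keepdims=True)
    return centers, U
```

In a skin-segmentation setting, each recovered center would seed one fuzzy rule of the Sugeno network, with the memberships serving as the rule firing strengths.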
Ganziy, Denis; Jespersen, O.; Rose, B.
We propose a novel dynamic gate algorithm (DGA) for fast and accurate peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of signal-to-noise ratio ...
Yoon, Hong Jun; Zheng, Bin; Sahiner, Berkman; Chakraborty, Dev P.
Computer-aided detection (CAD) has been attracting extensive research interest during the last two decades. It is recognized that the full potential of CAD can only be realized by improving the performance and robustness of CAD algorithms, and this requires good evaluation methodology that would permit CAD designers to optimize their algorithms. Free-response receiver operating characteristic (FROC) curves are widely used to assess CAD performance; however, evaluation rarely proceeds beyond determination of lesion localization fraction (sensitivity) at an arbitrarily selected value of non-lesion localizations (false marks) per image. This work describes an FROC curve fitting procedure that uses a recent model of visual search that serves as a framework for the free-response task. A maximum likelihood procedure for estimating the parameters of the model from free-response data and fitting CAD generated FROC curves was implemented. Procedures were implemented to estimate two figures of merit and associated statistics such as 95% confidence intervals and goodness of fit. One of the figures of merit does not require the arbitrary specification of an operating point at which to evaluate CAD performance. For comparison a related method termed initial detection and candidate analysis (IDCA) was also implemented that is applicable when all suspicious regions are known, no matter how low the degree of suspicion (or confidence level). The two methods were tested on seven mammography CAD data sets and both yielded good to excellent fits. The search model approach has the advantage that it can potentially be applied to radiologist generated free-response data where not all suspicious regions are reported, only the ones that are deemed sufficiently suspicious to warrant clinical follow-up. This work represents the first practical application of the search model to an important evaluation problem in diagnostic radiology. Software based on this work is expected to benefit CAD
Anomaly detection can provide clues about an outlying minority class in your data: hackers in a set of network events, fraudsters in a set of credit card transactions, or exotic particles in a set of high-energy collisions. In this talk, we analyze a real dataset of breast tissue biopsies, with malignant results forming the minority class. The "Isolation Forest" algorithm finds anomalies by deliberately “overfitting” models that memorize each data point. Since outliers have more empty space around them, they take fewer steps to memorize. Intuitively, a house in the country can be identified simply as “that house out by the farm”, while a house in the city needs a longer description like “that house in Brooklyn, near Prospect Park, on Union Street, between the firehouse and the library, not far from the French restaurant”. We first use anomaly detection to find outliers in the biopsy data, then apply traditional predictive modeling to discover rules that separate anomalies from normal data...
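As a concrete sketch of the technique the talk describes, using scikit-learn's IsolationForest and synthetic data instead of the biopsy set (all names and parameters here are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# A dense "normal" cluster plus one isolated point playing the anomaly.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
X = np.vstack([normal, [[8.0, 8.0]]])

# Each tree "memorizes" points by random splits; isolated points are
# separated in fewer splits, so they receive the lowest scores.
forest = IsolationForest(n_estimators=100, random_state=0).fit(X)
scores = forest.score_samples(X)  # lower score = more anomalous

print(int(np.argmin(scores)))  # index 200, the injected outlier
```

In the house analogy, the point at (8, 8) is "the house out by the farm": its average isolation depth is far smaller than that of any cluster member.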
Steenbruggen, John; Tranos, Emmanouil; Rietveld, P.
This paper proves that mobile phone usage data are an easy-to-use, cheap and, most importantly, reliable predictor of motorway incidents. Using econometric modelling, the paper provides a proof of concept of how mobile phone usage data can be utilised to detect motorway incidents in Greater Amsterdam.
Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang
Currently, community detection is a hot topic. Building on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), by which the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks from the LFR benchmark are used to verify the accuracy and efficiency of the algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately and efficiently. Furthermore, it also achieves higher values of modularity, NMI and density than the SOM algorithm does.
Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < 0.05), and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
Mohr, D. P.; Knapek, C. A.; Huber, P.; Zaehringer, E.
The micrometer-sized particles in a complex plasma can be directly visualized and recorded by digital video cameras. To analyze the dynamics of single particles, reliable algorithms are required to accurately determine their positions to sub-pixel accuracy from the recorded images. Here, we combine the algorithms with common techniques for image processing, and we study several algorithms, pre- and post-processing methods, and the impact of the choice of threshold parameters.
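A common baseline among such position-finding algorithms is the thresholded center-of-gravity (weighted centroid) estimator. The sketch below runs it on a synthetic particle image; the threshold value and spot parameters are illustrative, not from the study:

```python
import numpy as np

def subpixel_centroid(image, threshold):
    """Center-of-gravity estimate: weight each pixel above the threshold
    by its background-subtracted intensity."""
    img = np.asarray(image, dtype=float) - threshold
    img[img < 0] = 0.0
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian "particle" centered between pixel positions.
yy, xx = np.mgrid[0:15, 0:15]
spot = np.exp(-((xx - 7.3) ** 2 + (yy - 6.8) ** 2) / 4.0)
cx, cy = subpixel_centroid(spot, threshold=0.05)
print(cx, cy)  # close to the true sub-pixel center (7.3, 6.8)
```

The choice of threshold matters in practice, which is exactly the parameter sensitivity the abstract studies: too low and background noise biases the centroid, too high and the spot is clipped.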
brightness temperature values at various incidence angles for a particular grid location. This algorithm is compared and contrasted with other algorithms present in the visibility domain of SMOS, as well as the spatial domain. Initial results indicate that the SMOS RFI detection algorithm in the angular domain has a higher sensitivity and lower false-alarm rate than algorithms developed in the other two domains.
Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input, and uses bilinear projection to retain discriminative information, so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the testing data sets.
Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu
In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection algorithm based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, which improves the computational efficiency. Tests were done with the proposed algorithm on Zachary's karate club network, the dolphin social network and the football dataset, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy for dense networks, and that its operating efficiency is obviously improved.
and estimation is vital for effective water service. For effective detection of background leakages, a hydraulic analysis of flow characteristics in water piping networks is indispensable for appraising such type of leakage. A leakage detection algorithm...
In order to discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from a single node, but the agglomeration ability of a single node is weaker than that of multiple nodes; therefore, the community extension in our algorithm no longer starts from the initial node alone, but from a cluster that contains the initial node and whose nodes are relatively densely connected with each other. The algorithm mainly includes two phases: it first detects the minimal cluster, and then finds the local community extended from the minimal cluster. Experimental results show that the quality of the local communities detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.
Narkawicz, Anthony; Munoz, Cesar
In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
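The soundness and completeness argument rests on reasoning exactly about the roots of a polynomial. A rough numerical sketch of the underlying idea follows; it is not the formally verified algorithm (which uses exact arithmetic rather than floating-point root finding), and the separation minimum and trajectories are invented:

```python
import numpy as np

def horizontal_conflict(px, py, D, T):
    """px, py: polynomial coefficients (highest degree first) of the
    relative position of two aircraft over time t. Reports whether the
    horizontal distance drops to or below the separation minimum D
    within the lookahead interval [0, T]."""
    # d(t)^2 - D^2 is itself a polynomial; a conflict exists iff it is
    # non-positive somewhere on [0, T].
    p = np.polyadd(np.polymul(px, px), np.polymul(py, py))
    p = np.polyadd(p, [-D * D])
    if np.polyval(p, 0.0) < 0 or np.polyval(p, T) < 0:
        return True
    # Otherwise, a real root of the boundary polynomial inside (0, T)
    # means the D-circle is touched or crossed there.
    roots = np.roots(p)
    return any(abs(r.imag) < 1e-9 and 0.0 < r.real < T for r in roots)

# Two aircraft closing head-on: relative x-position 10 - 2t, y constant 1.
print(horizontal_conflict([-2.0, 10.0], [1.0], D=3.0, T=10.0))
# True: separation of 3 is lost near t = 5
```

Replacing `np.roots` with an exact real-root isolation method (e.g. Sturm sequences) is what makes the verified version both sound and complete.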
He, Lisha; Mao, Liangjing; Xie, Lijun
Infrared (IR) target detection is a key part of airborne IR weapon systems, especially the detection of dim moving IR targets embedded in complex backgrounds. This paper presents an improved dynamic programming (DP) algorithm for dim moving infrared targets with low signal-to-noise ratio (SNR) in cluttered backgrounds. The algorithm brings the dim target to prominence by accumulating the energy of pixels in the image sequence, after suppressing the background noise with a mathematical morphology preprocessor. By taking into account the continuity and stability of a target's energy and direction of motion, the algorithm solves the energy scattering problem of the original DP algorithm. An effective energy segmentation threshold is given by a contrast-limited adaptive histogram equalization (CLAHE) filter with a regional peak extraction algorithm. Simulation results show that the improved DP tracking algorithm performs well in detecting dim targets.
Stotsky, Alexander A.
Many event detection mechanisms in spark ignition automotive engines are based on comparing engine signals to detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds ... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008.
Zhao, Hong-dan; Liu, Guo-ying; Song, Xu
To address the problem of extracting line segments from an image, a line segment detection method based on graph search is proposed. After edge detection, candidate straight line segments are obtained in four directions. The adjacency relationships of the candidate segments are represented by a graph model, on which a depth-first search determines which adjacent line segments should be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
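The merge-then-fit stage of such a method can be sketched as follows; the segment data, adjacency graph and function name are invented for illustration:

```python
import numpy as np

def merge_and_fit(segments, adjacency):
    """segments: list of (N, 2) point arrays; adjacency: dict node -> neighbours.
    Depth-first search groups adjacent candidate segments, then each group
    is fitted with a least-squares line."""
    seen, lines = set(), []
    for start in range(len(segments)):
        if start in seen:
            continue
        stack, group = [start], []
        while stack:                      # iterative DFS over the graph
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            group.append(node)
            stack.extend(adjacency.get(node, []))
        pts = np.vstack([segments[i] for i in group])
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
        lines.append((slope, intercept))
    return lines

# Two collinear candidate segments marked adjacent, plus one separate segment.
segs = [np.array([[0., 0.], [1., 1.]]), np.array([[2., 2.], [3., 3.]]),
        np.array([[0., 5.], [1., 5.]])]
adj = {0: [1], 1: [0]}
print(merge_and_fit(segs, adj))  # two fitted lines: y = x and y = 5
```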
RAMOS, M. S.
This study shows the importance of genetic algorithms for computational problems that are extremely difficult to solve because of an impractically large number of solutions. Genetic algorithms (GA) are inspired by nature: a population of individuals is created and submitted to the genetic operators of selection, crossover and mutation, generating a process similar to natural evolution that converges to a satisfactory solution of the problem at hand. An extremely interesting and complex problem is protein cleavage, which consists in finding rules that involve combinations of amino acid sequences of various proteins. This problem has a very large number of candidate solutions, because the number of position/amino-acid combinations grows with the factorial of the number of positions and amino acids. Following the guidelines of the theory of evolution, a family of algorithms is used to solve the problem. The structures are organized following an abstract data model, and the test is done with a fictitious sequence.
Memon, Nasrullah; Wiil, Uffe Kock; Qureshi, Pir Abdul Rasool
In this paper, we present algorithms for subgroup detection and demonstrate them with a real-world case study of the USS Cole bombing terrorist network. The algorithms are demonstrated in an application by a prototype system. The system finds associations between terrorists and terrorist organisations...
Background: The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. Methods: We compared three control-chart-based statistics, two exponential weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals, and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. Results: Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56%) of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. Conclusion: When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
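Of the statistics compared, the exponentially weighted moving average is the easiest to sketch. The counts, smoothing weight and alert threshold below are illustrative, not the study's configuration:

```python
import numpy as np

def ewma_alerts(counts, lam=0.3, threshold=3.0):
    """Flag days where the EWMA of daily syndrome counts exceeds the
    baseline mean by `threshold` standard deviations of the smoothed
    statistic (a simplified control-chart-style detector)."""
    counts = np.asarray(counts, dtype=float)
    mean, sd = counts[:30].mean(), counts[:30].std()
    # Steady-state variance of the EWMA: sd^2 * lam / (2 - lam)
    limit = mean + threshold * sd * np.sqrt(lam / (2.0 - lam))
    ewma, alerts = mean, []
    for day, c in enumerate(counts):
        ewma = lam * c + (1.0 - lam) * ewma
        if ewma > limit:
            alerts.append(day)
    return alerts

rng = np.random.default_rng(1)
series = rng.poisson(10, size=60).astype(float)
series[45:50] += 15        # simulated outbreak: a surge of extra cases
print(ewma_alerts(series))  # alert days should fall in or just after the surge
```

As the abstract notes, a detector like this responds well to a surge of cases but poorly to slow-onset outbreaks, since the smoothed statistic drifts upward only gradually.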
National Aeronautics and Space Administration — We present a set of novel algorithms which we call sequenceMiner that detect and characterize anomalies in large sets of high-dimensional symbol sequences that arise...
Varadan, V.; Janevski, A.; Kamalakaran, S.; Banerjee, N.; Harris, L.; Dimitrova, D.
The detection and quantification of fusion transcripts has both biological and clinical implications. RNA sequencing technology provides a means for unbiased and high-resolution characterization of fusion transcript information in tissue samples. We evaluated two fusion detection algorithms,
Goldman, Geoffrey H.; Wolfe, Owen
The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross correlating the acoustic signals measured from two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
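The core of such a change detector, comparing the acoustic field measured at two times by cross correlation, might be sketched as follows (synthetic signals, not the ARL implementation):

```python
import numpy as np

def peak_xcorr(a, b):
    """Peak of the normalized cross-correlation between two recordings;
    a physical change in the scene lowers this similarity score."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.abs(np.correlate(a, b, mode="full")).max()

rng = np.random.default_rng(2)
room_response = rng.normal(size=1024)            # baseline measurement
same_scene = room_response + 0.05 * rng.normal(size=1024)
changed_scene = rng.normal(size=1024)            # e.g. a door was opened

baseline_score = peak_xcorr(room_response, same_scene)
changed_score = peak_xcorr(room_response, changed_scene)
print(baseline_score > changed_score)  # True: the change is detected
```

Thresholding the drop in peak correlation turns the similarity score into a binary change alarm.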
ARL-TR-8268, January 2018, US Army Research Laboratory. Technical report: An Automated Energy Detection Algorithm Based on Consecutive Mean Excision. Dates covered: 1 October 2016–30 September 2017.
Anzelmi, Daniele; Carlone, Domenico; Rizzello, Fabio
Plagiarism is a complex problem and considered one of the biggest in the publishing of scientific, engineering and other types of documents. Plagiarism has also increased with the widespread use of the Internet, as large amounts of digital data are available. Plagiarism is not just direct copying but also ... paraphrasing, rewording, adapting parts, missing references or wrong citations. This makes the problem more difficult to handle adequately. Plagiarism detection techniques are applied by making a distinction between natural and programming languages. Our proposed detection process is based on natural language ... document. Our plagiarism detection system, like many information retrieval systems, is evaluated with metrics of precision and recall.
Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa
In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.
Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar
Establishing benchmark datasets, performance metrics and baseline algorithms have considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a feature set that is Haar-like and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm reflecting the state-of-the-art which makes it an appropriate choice as the baseline algorithm for the problem.
Geng, Xiao; Zhang, Yanmei; Jiao, Yuhang; Mei, Yinan
Microblogs are characterized by large scale, varied topics and a great deal of topic-unrelated text, so we propose a three-layer hybrid clustering algorithm to replace the clustering used in existing topic detection models, which can hardly handle microblog data. In the first layer, we apply the K-means algorithm to cluster the microblog texts by topic. In the second layer, we use the agglomerative nesting algorithm to merge small clusters consisting of texts on the same topic. The first two layers also remove most of the noise, reducing its impact on the K-means in the third layer, which reassigns texts that were placed in the wrong cluster. Experiments show our algorithm outperforms related traditional algorithms in clustering a real dataset and performs well in topic detection.
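The first two layers of such a pipeline might be sketched with off-the-shelf clustering; the toy 2-D points stand in for microblog text features, the cluster counts are illustrative, and the third reassignment layer is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(3)
# Three well-separated topical "text" clusters in a toy embedding space.
docs = np.vstack([rng.normal(c, 0.3, size=(50, 2))
                  for c in ([0, 0], [5, 5], [0, 5])])

# Layer 1: K-means deliberately over-segments the corpus.
km = KMeans(n_clusters=9, n_init=10, random_state=3).fit(docs)

# Layer 2: agglomerative nesting merges small same-topic clusters by
# grouping the K-means centroids.
merge = AgglomerativeClustering(n_clusters=3).fit(km.cluster_centers_)
labels = merge.labels_[km.labels_]
print(len(set(labels)))  # 3 merged topic clusters
```

A third pass would re-run assignment against the merged clusters to move texts that landed in the wrong group.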
Shi, Jie; Tang, YingJie; Chen, ShiBin
Errors such as a wrong or upside-down signature occur mostly during gathering in a bookbinding production line, and affect the quality of bookbinding. This paper presents a new signature detection algorithm to detect these errors rapidly and accurately. The algorithm first constructs a scale space using the morphological pyramid method, then creates a region of interest by selecting an appropriate pyramid image, extracts features from the regions of interest and uses them as matching templates; furthermore, it filters the sample image and extracts the contour, and finally selects an appropriate similarity coefficient for template matching to obtain the matching results. The algorithm is implemented with MVTec Halcon software. Experiments show that the algorithm is rotation-invariant and strongly robust. The matching accuracy is 100%, and the low time consumption of the algorithm can meet the demands of high-speed production.
Bilal, Saoud; Abdelouahab, Moussaoui
Evolutionary algorithms are widely used today to solve problems in many fields, yet few community detection methods for networks are based on them. In this paper, we develop a new approach to community detection in networks based on an evolutionary algorithm. In this approach, we use an evolutionary algorithm to find a first community structure that maximizes the modularity; we then improve the community structure by merging communities, to obtain a final community structure with a high value of modularity. We provide a general framework for implementing our approach. Compared with state-of-the-art algorithms, simulation results on computer-generated and real-world networks reflect the effectiveness of our approach.
Hu, Weiming; Hu, Wei; Maybank, Steve
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
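A minimal sketch of boosted decision stumps on synthetic continuous features follows; scikit-learn's AdaBoostClassifier uses a depth-1 decision tree (a stump) as its default weak learner, but the paper's categorical-feature rules, adaptable initial weights and overfitting safeguard are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(4)
# Toy connection records with two continuous features; label 1 = "attack".
normal = rng.normal(0.0, 1.0, size=(300, 2))
attack = rng.normal(3.0, 1.0, size=(300, 2))
X = np.vstack([normal, attack])
y = np.array([0] * 300 + [1] * 300)

# Boosting reweights misclassified records each round, so later stumps
# concentrate on the hard cases near the class boundary.
model = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(model.score(X, y))  # training accuracy well above 0.9
```

Each stump is a single threshold test on one feature, which is what makes the combined classifier cheap to evaluate at line rate.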
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
This study presents an automated algorithm for fast pulse wave detection, aimed at establishing the presence of cardiac activity in an emergency. The method relies on real-time estimation of the similarity of closely positioned rising edges of the waveform, and on decision logic. The algorithm was tested on a set of pressure pulse waves from the MGH/MF waveform database from PhysioNet. Our approach to assessing the algorithm performance was based on location and classification of suspicious 10 s signal epochs by means of detection of dissimilar peak-to-peak intervals. The detected epochs were visually inspected and compared to the corresponding ECG-based expert beat annotations. The main epoch and error types were summarized. The performance of the algorithm and the visual interpretation of the results were illustrated by means of examples. The review of the recordings showed that the proposed algorithm correctly identifies cardiac pulsations even under considerable artefacts. Our conclusion is that the algorithm reliably detects critical periods in cardiac activity and is applicable to fast pulse wave detection in real-time applications and ambulatory measurement setups.
Saldivia, Sandra; Vicente, Benjamin; Marston, Louise; Melipillán, Roberto; Nazareth, Irwin; Bellón-Saameño, Juan; Xavier, Miguel; Maaroos, Heidi Ingrid; Svab, Igor; Geerlings, M-I; King, Michael
The reduction of major depression incidence is a public health challenge. To develop an algorithm to estimate the risk of occurrence of major depression in patients attending primary health centers (PHC). Prospective cohort study of a random sample of 2832 patients attending PHC centers in Concepción, Chile, with evaluations at baseline, six and twelve months. Thirty nine known risk factors for depression were measured to build a model, using a logistic regression. The algorithm was developed in 2,133 patients not depressed at baseline and compared with risk algorithms developed in a sample of 5,216 European primary care attenders. The main outcome was the incidence of major depression in the follow-up period. The cumulative incidence of depression during the 12 months follow up in Chile was 12%. Eight variables were identified. Four corresponded to the patient (gender, age, depression background and educational level) and four to patients' current situation (physical and mental health, satisfaction with their situation at home and satisfaction with the relationship with their partner). The C-Index, used to assess the discriminating power of the final model, was 0.746 (95% CI = 0.707-0.785), slightly lower than the equation obtained in European (0.790; 95% CI = 0.767-0.813) and Spanish attenders (0.82; 95% CI = 0.79-0.84). Four of the factors identified in the risk algorithm are not modifiable. The other two factors are directly associated with the primary support network (family and partner). This risk algorithm for the incidence of major depression provides a tool that can guide efforts towards design, implementation and evaluation of effectiveness of interventions to prevent major depression.
Jia Hou Chin
Community structure is considered one of the most interesting features in complex networks. Many real-world complex systems exhibit community structure, where individuals with similar properties form a community. The identification of communities in a network is important for understanding the structure of said network, in a specific perspective. Thus, community detection in complex networks gained immense interest over the last decade. A lot of community detection methods were proposed, and one of them is the label propagation algorithm (LPA). The simplicity and time efficiency of the LPA make it a popular community detection method. However, the LPA suffers from unstable detection results due to the randomness induced in the algorithm. The focus of this paper is to improve the stability and accuracy of the LPA, while retaining its simplicity. Our proposed algorithm will first detect the main communities in a network by using the number of mutual neighbouring nodes. Subsequently, nodes are added into communities by using a constrained LPA. Those constraints are then gradually relaxed until all nodes are assigned into groups. In order to refine the quality of the detected communities, nodes in communities can be switched to another community or removed from their current communities at various stages of the algorithm. We evaluated our algorithm on three types of benchmark networks, namely the Lancichinetti-Fortunato-Radicchi (LFR), Relaxed Caveman (RC) and Girvan-Newman (GN) benchmarks. We also apply the present algorithm to some real-world networks of various sizes. The current results show some promising potential, of the proposed algorithm, in terms of detecting communities accurately. Furthermore, our constrained LPA has a robustness and stability that are significantly better than the simple LPA as it is able to yield deterministic results.
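The constrained-propagation idea, in which seed communities are fixed first and remaining nodes adopt the majority label of their already-labelled neighbours with ties deferred, might look like the following sketch (toy graph and seeds; not the authors' algorithm, which also derives the seeds automatically and refines the result):

```python
from collections import Counter

def constrained_lpa(adj, seeds, max_iter=100):
    """Constrained label propagation sketch: seed nodes keep their
    community labels; other nodes repeatedly adopt the majority label
    among labelled neighbours. Ties are deferred to a later sweep, which
    removes the randomness that makes plain LPA unstable."""
    labels = dict(seeds)
    for _ in range(max_iter):
        progressed = False
        for v in adj:
            if v in labels:
                continue
            counts = Counter(labels[u] for u in adj[v] if u in labels)
            if not counts:
                continue
            (top, c1), *rest = counts.most_common()
            if rest and rest[0][1] == c1:
                continue          # tie: wait for more labelled neighbours
            labels[v], progressed = top, True
        if not progressed:
            break
    return labels

# Two 4-cliques joined by the bridge edge 3-4; one seed in each clique.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = constrained_lpa(adj, seeds={0: "A", 7: "B"})
print(labels)  # nodes 1-3 join community "A", nodes 4-6 join "B"
```

Because ties are deferred rather than broken at random, repeated runs on the same graph give identical results, which is the determinism property the abstract highlights.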
Aiming at the requirement of real-time signal detection in passive surveillance systems, a wideband array signal detection algorithm is proposed based on the concept of power focusing. By making use of the phase differences of the signal received by a uniform linear array, the algorithm focuses the power of the received signal in the direction of arrival (DOA) using an improved cascade FFT. Subsequently, the probability density function of the output noise at each angle is derived. Furthermore, a constant false alarm rate (CFAR) test statistic and the corresponding detection threshold are constructed. The theoretical probability of detection is also derived for different false alarm rates and signal-to-noise ratios (SNR). The proposed algorithm is computationally efficient, and the detection process is independent of prior information. Meanwhile, its results can act as the initial value for other algorithms with higher precision. Simulation results show that the proposed algorithm achieves good performance for weak signal detection.
Jiang, Yawen; Jia, Caiyan; Yu, Jian
Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally proved to be better than the Q function. However, up until now, there has been no algorithm available to perform searches to directly determine the maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and even more complex networks. Moreover, algorithms based on surprise maximization perform better than those algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and the other state-of-the-art algorithms such as Infomap, order statistics local optimization method (OSLOM) and label propagation algorithm (LPA).
Hosseinmardi, Homa; Mattson, Sabrina Arredondo; Rafiq, Rahat Ibn; Han, Richard; Lv, Qin; Mishra, Shivakant
Cyberbullying is a growing problem affecting more than half of all American teens. The main goal of this paper is to investigate fundamentally new approaches to understand and automatically detect incidents of cyberbullying over images in Instagram, a media-based mobile social network. To this end, we have collected a sample Instagram data set consisting of images and their associated comments, and designed a labeling study for cyberbullying as well as image content using human labelers at th...
a BLAST search of all these sequences against a database containing sequences of a host genome (e.g. human genome) will take enormous amount of time and computing resources. In this article, we present a novel alignment-free algorithm, called Eu-Detect, that can detect eukaryotic sequences in metagenomic data ...
Fang, X H; Xiong, W; Hu, B J; Wang, L T
This paper designs a new moving-object detection algorithm aimed at fast detection and localization of moving objects. It represents each pixel by an image vector formed from the pixel and its neighbours, models each chrominance component as a mixture of Gaussians, and sets up a separate Gaussian mixture model for each YUV chrominance component. In order to make full use of spatial information, colour segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
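The background-modeling step can be sketched with a per-pixel running Gaussian model; this is a single-Gaussian simplification of the mixture-of-Gaussians scheme described above, and the learning rate, threshold, and toy frames are illustrative assumptions:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model (a single-Gaussian
    simplification of a mixture-of-Gaussians background subtractor)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)
        self.alpha = alpha  # learning rate
        self.k = k          # foreground threshold in standard deviations

    def apply(self, frame):
        frame = frame.astype(float)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # update the model only where the pixel matches the background
        a = np.where(foreground, 0.0, self.alpha)
        self.mean += a * diff
        self.var += a * (diff ** 2 - self.var)
        return foreground

bg = GaussianBackground(np.zeros((4, 4)))
frame = np.zeros((4, 4))
frame[1, 1] = 200                     # a bright moving object
mask = bg.apply(frame)
print(bool(mask[1, 1]), bool(mask[0, 0]))  # → True False
```

A full mixture model would keep several (mean, variance, weight) triples per pixel and per chrominance channel, matching each incoming pixel to its closest component.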
Eastwood, Sophie V; Mathur, Rohini; Atkinson, Mark; Brophy, Sinead; Sudlow, Cathie; Flaig, Robin; de Lusignan, Simon; Allen, Naomi; Chaturvedi, Nishi
UK Biobank is a UK-wide cohort of 502,655 people aged 40-69, recruited from National Health Service registrants between 2006-10, with healthcare data linkage. Type 2 diabetes is a key exposure and outcome. We developed algorithms to define prevalent and incident diabetes for UK Biobank. The algorithms will be implemented by UK Biobank and their results made available to researchers on request. We used UK Biobank self-reported medical history and medication to assign prevalent diabetes and type, and tested this against linked primary and secondary care data in Welsh UK Biobank participants. Additionally, we derived and tested algorithms for incident diabetes using linked primary and secondary care data in the English Clinical Practice Research Datalink, and ran these on secondary care data in UK Biobank. For prevalent diabetes, 0.001% and 0.002% of people classified as "diabetes unlikely" in UK Biobank had evidence of diabetes in their primary or secondary care record respectively. Of those classified as "probable" type 2 diabetes, 75% and 96% had specific type 2 diabetes codes in their primary and secondary care records. For incidence, 95% of people with the type 2 diabetes-specific C10F Read code in primary care had corroborative evidence of diabetes from medications, blood testing or diabetes specific process of care codes. Only 41% of people identified with type 2 diabetes in primary care had secondary care evidence of type 2 diabetes. In contrast, of incident cases using ICD-10 type 2 diabetes specific codes in secondary care, 77% had corroborative evidence of diabetes in primary care. We suggest our definition of prevalent diabetes from UK Biobank baseline data has external validity, and recommend that specific primary care Read codes should be used for incident diabetes to ensure precision. Secondary care data should be used for incident diabetes with caution, as around half of all cases are missed, and a quarter have no corroborative evidence of diabetes in
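A rule-based prevalence algorithm of this kind can be sketched as a small decision function; the field names, rules, and cut-offs below are hypothetical illustrations, not the published UK Biobank algorithm:

```python
def classify_prevalent_diabetes(record):
    """Toy rule-based classifier over self-reported baseline fields.
    All field names and cut-offs are hypothetical, for illustration only."""
    self_report = record.get("self_reported_diabetes", False)
    on_insulin = record.get("insulin_within_1yr_of_diagnosis", False)
    age_at_dx = record.get("age_at_diagnosis")
    if not self_report and not on_insulin:
        return "diabetes unlikely"
    if on_insulin and age_at_dx is not None and age_at_dx < 30:
        return "probable type 1 diabetes"
    return "probable type 2 diabetes"

print(classify_prevalent_diabetes({"self_reported_diabetes": True,
                                   "age_at_diagnosis": 55}))
# → probable type 2 diabetes
```

The real algorithms additionally cross-check such assignments against linked primary and secondary care codes, as the abstract describes.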
Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.
A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
Kniaz, V. V.; Fedorenko, V. V.
The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic and the real optical flow yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmenting this difference of optical flows. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
Malhan, Khyati; Ibata, Rodrigo A.
We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the ESA/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ˜15 members (ΣG ˜ 33.6 mag arcsec-2) in the Gaia dataset.
Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of vehicle detection methods. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measure and multifeature fusion. In the feature similarity measure phase, we first propose a taillight-based vehicle detection method, and then a vehicle taillight feature similarity measure is defined. Second, combining with the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused together by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method can not only improve the detection rate but also reduce the false alarm rate, which meets the engineering requirements of Advanced Driving Assistance Systems (ADAS).
Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef
Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort to develop efficient community detection algorithms. Unfortunately, the fair assessment of these algorithms remains an open question. If the ground-truth community structure is available, various clustering-based metrics are used to compare it with the one discovered by these algorithms. However, these metrics, defined at the node level, are fairly insensitive to variations of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.
Mislis, D.; Bachelet, E.; Alsubai, K. A.; Bramich, D. M.; Parley, N.
We present the Signal Detection using Random-Forest Algorithm (SIDRA). SIDRA is a detection and classification algorithm based on the Random Forest machine learning technique. The goal of this paper is to show the power of SIDRA for quick and accurate signal detection and classification. We first diagnose the power of the method with simulated light curves and then try it on a subset of the Kepler space mission catalogue. We use five classes of simulated light curves (CONSTANT, TRANSIT, VARIABLE, MLENS and EB for constant light curves, transiting exoplanets, variables, microlensing events and eclipsing binaries, respectively) to analyse the power of the method. The algorithm uses four features in order to classify the light curves. The training sample contains 5000 light curves (1000 from each class) and 50 000 random light curves for testing. The total SIDRA success ratio is ≥90 per cent. Furthermore, the success ratio reaches 95-100 per cent for the CONSTANT, VARIABLE, EB and MLENS classes and 92 per cent for the TRANSIT class with a decision probability of 60 per cent. Because the TRANSIT class is the one which fails the most, we run a simultaneous fit using SIDRA and a Box Least Squares (BLS)-based algorithm for searching for transiting exoplanets. As a result, our algorithm detects 7.5 per cent more planets than a classic BLS algorithm, with better results for lower signal-to-noise light curves. SIDRA succeeds in catching 98 per cent of the planet candidates in the Kepler sample and fails for 7 per cent of the false alarms subset. SIDRA promises to be useful as a detection algorithm and/or classifier for large photometric surveys such as the TESS and PLATO future exoplanet space missions.
[Supplementary figure 1: plots depicting the classification accuracy of Eu-Detect with various combinations of 'cumulative sequence count' (40K, 50K, 60K, 70K, 80K) and 'coverage threshold' (20%, 30%, 40%, 50%, 60%, 70%, 80%); blue bars represent Eu-Detect's average classification accuracy on eukaryotic data sets.]
Outlier Interval Detection (OID) is a crucial technique for analyzing spacecraft faults, locating exceptions, and implementing intelligent fault diagnosis systems. The paper proposes two OID algorithms for astronautical Time Series Data (TSD): variance-based OID (VOID) and FFT and k-nearest-neighbour-based OID (FKOID). The VOID algorithm divides the TSD into many intervals and measures each interval's outlier score according to its variance. This algorithm can detect outlier intervals with great fluctuation in the time domain. It is a simple and fast algorithm with low time complexity, but it ignores frequency information. The FKOID algorithm extracts the frequency information of each interval by means of the Fast Fourier Transform, calculates the distances between frequency features, and adopts the KNN method to measure the outlier score according to the sum of distances between an interval's frequency vector and its K nearest frequency vectors. It detects outlier intervals in a refined way, at an appropriate expense of time, and is valid for detecting outlier intervals in both the frequency and time domains.
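The VOID idea (score each fixed-length interval by its variance so that high-fluctuation intervals stand out) can be sketched in a few lines; the interval length and synthetic series below are illustrative assumptions:

```python
import numpy as np

def void_scores(ts, interval_len):
    """Split the series into fixed-length intervals and score each by
    its variance (the variance-based outlier-interval idea)."""
    n = len(ts) // interval_len
    intervals = ts[: n * interval_len].reshape(n, interval_len)
    return intervals.var(axis=1)

rng = np.random.default_rng(2)
ts = rng.normal(0, 0.1, 1000)
ts[300:350] += rng.normal(0, 5.0, 50)   # one high-fluctuation interval
scores = void_scores(ts, 50)
print(int(scores.argmax()))  # → 6  (interval covering samples 300-349)
```

FKOID would replace the variance score with a distance in FFT-feature space to the K nearest intervals, catching anomalies that change frequency content without changing variance.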
Şehirli, E.; Turan, M. K.; Demiral, E.
The retina is one of the important layers of the eye; it contains cells sensitive to colour and light, as well as nerve fibers. The retina can be imaged using medical devices such as the fundus camera and the ophthalmoscope. Hence, lesions such as microaneurysms, haemorrhages, and exudates associated with many diseases of the eye can be detected in the images taken by these devices. In computer vision and biomedicine, studies on automatically detecting lesions of the eye have been carried out for a long time. In order to automate detection, the concept of the ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concern of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: a pre-processing stage, detecting the ROI on the processed images, and overlapping the input image with the obtained ROI.
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
Wang, Bin; Xu, Wenhai; Zhao, Ming; Wu, Houde
When searching for small targets at sea with an infrared imaging system, irregular and random vibration of the airborne imaging platform causes intense interference for pipeline-filtering, an algorithm that performs well in detecting small targets but is particularly sensitive to interframe vibrations of image sequences. This paper puts forward a pipeline-filtering algorithm with good self-adaptive antivibration performance. Using a block matching method that combines the normalized cross-correlation coefficient with normalized mutual information, the interframe vibration of the image sequence is acquired in real time and used to correct the coordinates of the single-frame detection results, and the corrected detection results are then used to complete the pipeline-filtering. In addition, under severe sea conditions, small targets at sea may disappear transiently, leading to missed detections. The algorithm is also able to resolve this problem. Experimental results show that the algorithm can overcome interframe vibration of image sequences, thus realizing accurate detection of small maritime targets.
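The normalized cross-correlation half of the block matching step can be sketched as an exhaustive search for the interframe shift; the mutual-information term and the pipeline-filter itself are omitted, and the frame sizes, search radius, and simulated vibration are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_block(block, frame, top, left, search=4):
    """Search a +/-search pixel window around (top, left) for the shift
    maximizing NCC between the block and the new frame."""
    h, w = block.shape
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            score = ncc(block, frame[y:y + h, x:x + w])
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

rng = np.random.default_rng(3)
frame1 = rng.normal(size=(32, 32))
frame2 = np.roll(np.roll(frame1, 2, axis=0), 1, axis=1)  # simulated vibration
shift, score = match_block(frame1[8:16, 8:16], frame2, 8, 8)
print(shift)  # → (2, 1)
```

The recovered shift is what the algorithm would subtract from the single-frame detection coordinates before feeding them to the pipeline-filter.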
Huang, Adam; Li, Jiang; Summers, Ronald M.; Petrick, Nicholas; Hara, Amy K.
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < 0.05) for the two-step algorithm than for the one-step algorithm at 63% of all possible operating points. While operating at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the false positive rate was reduced by 24.4% (95% confidence interval 17.9-31.0%) or 45.8% (95% confidence interval 40.1-51.0%), respectively. We demonstrated that, with a proper experimental design, the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms. PMID:20548966
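The notion of Pareto-optimal operating points (maximize sensitivity while minimizing the false positive rate) can be sketched as a small dominance filter; the operating points below are made up for illustration, not taken from the study:

```python
# Keep only operating points not dominated by another point that has
# sensitivity at least as high AND a false positive rate at least as low.
def pareto_front(points):
    """points: list of (sensitivity, fp_rate) pairs."""
    front = []
    for sens, fp in points:
        dominated = any(s >= sens and f <= fp and (s, f) != (sens, fp)
                        for s, f in points)
        if not dominated:
            front.append((sens, fp))
    return sorted(front)

# hypothetical (sensitivity, FP-rate) operating points from parameter sweeps
ops = [(0.90, 5.0), (0.88, 3.0), (0.85, 4.0), (0.92, 8.0), (0.90, 6.0)]
print(pareto_front(ops))  # → [(0.88, 3.0), (0.9, 5.0), (0.92, 8.0)]
```

Comparing the fronts of two algorithms, as the study does, shows which one offers a better false-positive rate at each achievable sensitivity.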
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to the law-enforcement and intelligence communities. Performance of these methods is impacted by spectral signature variability due to the presence of contaminants, surface roughness, and nonlinear dependence on abundances, as well as by operational limitations of the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include the adaptive cosine estimator (ACE) and modifications to it, compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal, as well as their built-in limitations and failure modes.
Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.
Four detection algorithms for GEO objects are being developed under a collaboration between Kyushu University, IHI Corporation and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is a PC-based stacking method which has been developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image. Sub-images are cropped from many CCD images to fit the movement of the objects, and a median image of all the sub-images is then created. Although this method has the ability to detect faint objects, it takes time to analyze. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last. It analyzes data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. It has the same analysis speed as the line-identifying technique and better detection capability for faint objects. The fourth is an FPGA-based stacking method which uses binarized images and a new algorithm installed on an FPGA board, reducing analysis time by roughly a factor of one thousand. All four algorithms analyzed the same sets of data to evaluate their advantages and disadvantages. By comparing their analysis times and results, an optimal usage of these algorithms is considered.
This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as onsets or non-onsets. Frames classified as onsets are then treated with a simple peak-picking algorithm based on a moving average. We present two versions of this approach. The first version uses a single neural network classifier. The second combines the predictions of several networks trained using different hyperparameters. We describe the details of the algorithm and summarize the performance of both variants on several datasets. We also examine our choice of hyperparameters by describing the results of cross-validation experiments done on a custom dataset. We conclude that a supervised learning approach to note onset detection performs well and warrants further investigation.
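The peak-picking stage described above can be sketched as a local-maximum test against a moving average; the window size, bias, and synthetic activation curve (standing in for the network output) are illustrative assumptions:

```python
import numpy as np

def pick_onsets(activation, window=5, bias=0.1):
    """A frame is an onset if it is the local maximum of its window and
    exceeds the window's moving average by a fixed bias."""
    onsets = []
    for i in range(len(activation)):
        lo, hi = max(0, i - window), min(len(activation), i + window + 1)
        local = activation[lo:hi]
        if activation[i] == local.max() and activation[i] > local.mean() + bias:
            onsets.append(i)
    return onsets

act = np.zeros(50)
act[[10, 30]] = 1.0           # two clear onset peaks
act += 0.01                   # small constant activation floor
print(pick_onsets(act))       # → [10, 30]
```

The bias term suppresses spurious picks on flat, low-level activation, which is why the constant floor in the example produces no false onsets.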
Settings and Design: The proposed mass detector consists of two major steps. In the first step, several suspicious regions are extracted from the mammograms using an adaptive thresholding technique. In the second step, false positives originating from the previous stage are reduced by a machine learning approach. Materials and Methods: All modules of the mass detector were assessed on the mini-MIAS database. In addition, the algorithm was tested on the INBreast database for further validation. Results: According to FROC analysis, our mass detection algorithm outperforms other competing methods. Conclusions: We should not insist solely on sensitivity in the segmentation phase: if we ignored the FP rate and aimed only for higher sensitivity, the learning algorithm would be biased toward false positives and sensitivity would decrease dramatically in the false positive reduction phase. Therefore, we should treat mass detection as a cost-sensitive problem, because misclassification costs are not the same in this type of problem.
The bistatic radar weak target detection problem is considered in this paper. An effective way to detect a weak target is long-time integration. However, range migration (RM) occurs due to the target's high speed. A long-time integration algorithm for bistatic radar that does not require knowledge of the target motion parameters is proposed in this paper. Firstly, the algorithm utilizes the second-order keystone transform (SKT) to remove range curvature. Then the quadratic phase term is compensated using the estimated acceleration. After that, the SKT is applied once more and the Doppler ambiguity phase term is compensated. Finally, the target energy is integrated via the Fourier transform. Simulations are provided to show the validity of the proposed algorithm.
Zhan, Honglei; Zhao, Kun; Lü, Huibin; Jin, Kuijuan; Yang, Guozhen; Chen, Xiaohong
Analogous with scanning electron microscopy, we use an oblique-incidence reflectivity difference (OIRD) approach for morphology detection. By scanning the active carbon clusters in a one-dimensional way and the reservoir rocks in a two-dimensional way, the morphology of the samples' surface can be revealed in OIRD signal images. High OIRD signals of active carbon samples refer to the centralized distribution areas of carbon, and the fluctuations are caused by the uneven distribution of carbon pellets. OIRD intensity is proportional to the thickness of materials. In terms of rocks, the trough areas with smaller values refer to the low-lying fields. The areas with relatively large OIRD intensities correspond to the protuberance areas of rocks. Consequently, OIRD is a sensitive yet rapid measure of surface detection in material and petrogeology science.
Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E; Watson, Nathaniel; Sweet-Cordero, E Alejandro; Salzman, Julia
Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. making them domain-inspired. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) method by considering the network's nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and the fine-tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
The past decade has been marked by a proliferation of community detection algorithms that aim to organize nodes (e.g., individuals, brain regions, variables) into modular structures that indicate subgroups, clusters, or communities. Motivated by the emergence of big data across many fields of inquiry, these methodological developments have primarily focused on the detection of communities of nodes in very large matrices. However, it remains unknown whether the algorithms can reliably detect communities in the smaller graph sizes (i.e., 1000 nodes and fewer) commonly used in brain research. More importantly, these algorithms have predominantly been tested only on binary or sparse count matrices, and it remains unclear to what degree the algorithms can recover community structure for different types of matrices, such as the often-used cross-correlation matrices representing functional connectivity across predefined brain regions. Of the publicly available approaches for weighted graphs that can detect communities in graphs of at least 1000 nodes, prior research has demonstrated that Newman's spectral approach (i.e., Leading Eigenvalue), Walktrap, Fast Modularity, the Louvain method (i.e., the multilevel community method), Label Propagation, and Infomap all recover communities exceptionally well in certain circumstances. The purpose of the present Monte Carlo simulation study is to test these methods across a large number of conditions, including varied graph sizes and types of matrix (sparse count, correlation, and reflected Euclidean distance), to identify which algorithm is optimal for specific types of data matrices. The results indicate that when the data are in the form of sparse count networks (such as those seen in diffusion tensor imaging), Label Propagation and Walktrap surfaced as the most reliable methods for community detection. For dense, weighted networks such as correlation matrices capturing functional connectivity
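Of the algorithms compared above, Label Propagation is the simplest to sketch: every node repeatedly adopts the most frequent label among its neighbours until labels stabilize. The toy two-clique graph and the deterministic smallest-label tie-breaking below are illustrative simplifications:

```python
import random

def label_propagation(adj, seed=0, max_iter=100):
    """Asynchronous label propagation on an adjacency-list graph.
    Ties break deterministically toward the smallest label."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(sorted(counts), key=counts.get)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# two 4-cliques joined by a single edge: a clear two-community graph
adj = {i: [] for i in range(8)}
for grp in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for a in grp:
        for b in grp:
            if a != b:
                adj[a].append(b)
adj[3].append(4)
adj[4].append(3)
labels = label_propagation(adj)
print(sorted(set(labels.values())))  # one label per clique
```

Each clique converges to a single internal label, and the lone bridging edge is never enough to merge the two communities.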
Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting
This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an RCNN. First, rubbish region proposals are obtained on the real-time street image, and a CNN is trained on a rubbish sample set consisting of rubbish and non-rubbish images. Second, for every clean street image, SIFT features are extracted and image registration with the real-time street image yields a differential image; the differential image filters out most background information, and the region proposals where rubbish may appear are obtained on it with the selective search algorithm. Then, the CNN model is used to classify the image pixel data in each region proposal of the real-time street image. According to the output vector of the CNN, it is judged whether rubbish is present in the region proposal; if so, the region proposal is marked on the real-time street image. This algorithm avoids the large number of false detections caused by running detection on the whole image, because the CNN examines only the region proposals of the real-time street image where rubbish may appear. Unlike traditional region-proposal-based object detection algorithms, the region proposals are obtained on the differential image rather than on the whole real-time street image, which greatly reduces the number of invalid proposals. The algorithm achieves a high mean average precision (mAP).
Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung
Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
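The notion of a filter conflict (two filters overlap in every header field but neither contains the other) can be sketched with a naive pairwise check; note this is an O(n^2) illustration of the definition, not the paper's O(nW+s) tuple-based search, and the two-field range filters below are made up:

```python
# Filters are lists of per-field (lo, hi) ranges, e.g. (src, dst).
def overlaps(a, b):
    """True if the filters intersect in every field."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def contains(a, b):
    """True if filter a fully contains filter b in every field."""
    return all(lo1 <= lo2 and hi2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def find_conflicts(filters):
    """Report index pairs that overlap without containment either way."""
    conflicts = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            a, b = filters[i], filters[j]
            if overlaps(a, b) and not contains(a, b) and not contains(b, a):
                conflicts.append((i, j))
    return conflicts

filters = [
    [(0, 100), (50, 60)],    # filter 0
    [(50, 150), (0, 55)],    # filter 1: partially overlaps filter 0
    [(200, 300), (0, 255)],  # filter 2: disjoint from both
]
print(find_conflicts(filters))  # → [(0, 1)]
```

A packet matching the overlap region of filters 0 and 1 would be classified ambiguously, which is exactly the condition a conflict-resolution scheme must detect.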
Background: Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results: Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared with fixed parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of sizes and structures. Significant differences in all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion: Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
Paul Jean Etienne Jeszensky
In this paper, the particle swarm optimization technique, recently published in the literature, is analyzed, evaluated and compared as applied to Direct Sequence/Code Division Multiple Access (DS/CDMA) systems with multiuser detection (MuD). The efficiency of the Swarm algorithm applied to DS-CDMA multiuser detection (Swarm-MuD) is compared through the tradeoff between performance and computational complexity, the complexity being expressed in terms of the number of operations necessary to reach the performance obtained through the optimum detector, i.e. the Maximum Likelihood (ML) detector. The comparison is accomplished among the genetic algorithm, evolutionary programming with cloning, and the Swarm algorithm under the same simulation basis. Additionally, a heuristics-MuD complexity analysis based on the number of computational operations is proposed. Finally, an analysis of the Swarm algorithm's input parameters is carried out in an attempt to find the optimum (or near-optimum) parameters for the algorithm applied to the MuD problem.
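A minimal particle swarm optimizer of the kind compared above. The paper applies PSO to choosing the multiuser detection bit vector; this sketch instead minimizes a toy continuous function, with all coefficients (inertia 0.7, acceleration 1.5) illustrative defaults rather than the paper's tuned values.

```python
import random

def pso(fitness, dim, n_particles=20, iters=60, seed=3):
    """Minimal PSO: each particle tracks its personal best and is pulled
    toward both it and the global best found by the swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - p[d])
                             + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)    # optimum at the origin
best = pso(sphere, dim=3)
print(sphere(best))   # a value close to 0
```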
In the big-data era, single detection techniques no longer meet the demands posed by complex network attacks and advanced persistent threats, yet there is no uniform standard that lets different correlation-analysis detection methods be performed efficiently and accurately. In this paper, we put forward a universal correlation-analysis detection model and algorithm by introducing a state transition diagram. Based on analyzing and comparing current correlation detection modes, we formalize the correlation patterns, propose a framework organized around data-packet timing and behavior qualities, and then design a new universal algorithm to implement the method. Finally, an experiment that sets up a lightweight intrusion detection system using the KDD1999 dataset shows that the correlation detection model and algorithm improve performance while guaranteeing high detection rates.
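The state-transition idea can be sketched as a tiny matcher that walks a transition diagram over an event stream, advancing one state per matching event; the event names and the three-step pattern here are hypothetical, not from the paper.

```python
def correlate(events, pattern):
    """Advance through a linear state-transition diagram as matching events
    arrive; return the index at which the multi-step attack pattern
    completes, or -1 if it never does."""
    state = 0
    for i, ev in enumerate(events):
        if ev == pattern[state]:
            state += 1
            if state == len(pattern):
                return i          # attack pattern fully matched
    return -1

stream = ["scan", "login", "scan", "exploit", "normal", "exfiltrate"]
print(correlate(stream, ["scan", "exploit", "exfiltrate"]))  # -> 5
print(correlate(stream, ["exploit", "scan"]))                # -> -1
```

A production version would attach timing constraints and behavior qualities to each transition, as the paper's framework does.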
Gongzhang, Rui; Gachagan, Anthony
A range of materials used in industry exhibit scattering properties that limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single-transducer or phased array configurations. The proposed algorithm uses unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as `legitimate reflector' or `artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provides an additional reduction in PFA while maintaining PoD for both samples, compared with SSP alone.
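As a minimal stand-in for the unsupervised clustering step above (the paper's feature space and clustering method are not specified here), a two-centre clustering of hypothetical segment amplitudes separates an "artefact" cluster from a "legitimate reflector" cluster:

```python
def two_means(values, iters=20):
    """1-D 2-means: alternately assign each value to its nearest centre
    and recompute the centres. Initialized at the min and max values."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        if a: c0 = sum(a) / len(a)
        if b: c1 = sum(b) / len(b)
    return c0, c1

# Hypothetical peak amplitudes of candidate segments after SSP.
amps = [0.1, 0.12, 0.09, 0.11, 0.8, 0.85, 0.78]
c0, c1 = two_means(amps)
print(c0, c1)   # low centre ~ artefact noise, high centre ~ real defects
```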
Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu
The salient region is the most important region of an image: it attracts human visual attention and response. Preferentially allocating computing resources to this significant region during image analysis and synthesis is of great value for improving region detection. As a preprocessing step for other disciplines in the image-processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, saliency detection based on super-pixel segmentation with linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the regional-contrast approach by replacing its region-formation method with super-pixel blocks produced by linear spectral clustering. Combined with recent deep learning methods, the accuracy of salient region detection is greatly improved. Finally, comparative tests demonstrate the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering.
Li, Liying; Zhou, Jianying; Xiao, Ning
Distributed Denial of Service (DDoS) attacks pose a severe threat to the Internet. It is difficult to find the exact signature of an attack, and it is hard to distinguish whether an unusually high volume of traffic is caused by an attack or simply by a huge number of users occasionally accessing the target machine at the same time. Entropy detection is an effective method for detecting DDoS attacks; it calculates the randomness of the distribution of certain attributes in the network packets' headers. In this paper, we focus on DDoS detection technology: we improve the previous entropy detection algorithm and propose two enhanced detection methods, based on cumulative entropy and on time, respectively. Experimental results show that these methods lead to more accurate and effective DDoS detection.
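The basic entropy signal can be sketched as follows: compute the Shannon entropy of a header attribute (source IP here, purely as an example) over a monitoring window; a flood concentrated on few values collapses the entropy, which is what the detector thresholds. The paper's cumulative-entropy and time-based enhancements build on this per-window quantity.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of an attribute's empirical distribution,
    e.g. the source-IP field of packet headers in a window."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Normal traffic: many distinct sources -> high entropy (log2(16) = 4 bits).
normal = [f"10.0.0.{i}" for i in range(16)]
# Flood dominated by one source -> entropy collapses.
attack = ["10.0.0.1"] * 15 + ["10.0.0.2"]
print(entropy(normal), entropy(attack))
```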
Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid
Detecting structural breaks is an essential task in the statistical analysis of time series, for example when fitting parametric models to them. In short, structural breaks are points in time at which the behaviour of the time series changes substantially. Typically, no solid background knowledge of the time series under consideration is available. Therefore, a black-box optimization approach is our method of choice for detecting structural breaks. We describe a genetic algorithm framework which easily adapts to a large number of statistical settings. To evaluate the usefulness of different crossover … operator alone. Moreover, we present a specific fitness function which exploits the sparse structure of the break points and which can be evaluated particularly efficiently. Experiments on artificial and real-world time series show that the resulting algorithm detects break points with high precision…
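A guess at the flavor of such a fitness function (the paper's exact form is not given here): score a candidate break-point set by the squared error of each segment about its own mean plus a sparsity penalty per break, so that only breaks that genuinely reduce residual error pay off.

```python
def fitness(series, breaks, penalty=2.0):
    """Cost of a candidate break-point set; lower is better. The penalty
    term exploits the expected sparsity of true break points."""
    cuts = [0] + sorted(breaks) + [len(series)]
    cost = penalty * len(breaks)
    for a, b in zip(cuts, cuts[1:]):
        seg = series[a:b]
        if seg:
            m = sum(seg) / len(seg)              # segment mean
            cost += sum((x - m) ** 2 for x in seg)
    return cost

series = [0.0] * 10 + [5.0] * 10    # one obvious level shift at index 10
print(fitness(series, [10]))        # -> 2.0 (just the penalty)
print(fitness(series, [10]) < fitness(series, []))  # True: the break pays off
```

A GA individual would encode a break-point set, with crossover and mutation exploring alternative sets under this cost.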
Xianbo, Xiao; Guangshu, Hu
Traditional voice activity detection algorithms are mostly threshold-based or statistical-model-based, and all of them lack the ability to react quickly to variations in the environment. This paper describes an incremental SVM (Support Vector Machine) method for speech activity detection. The proposed incremental procedure makes the detector adaptive to environmental variation, and the special construction of the incremental training data set effectively decreases computational cost. Experimental results demonstrate higher endpoint detection accuracy. Further work will focus on further decreasing computational cost and introducing multi-class SVM classifiers.
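A sketch of the incremental idea, under stated assumptions: the paper's method updates an exact SVM incrementally, while this toy uses SGD on the hinge loss of a linear SVM, whose partial_fit folds new frames in without retraining from scratch; the 2-D "frame features" are synthetic.

```python
import numpy as np

class IncrementalLinearSVM:
    """Online linear SVM trained by SGD on the hinge loss."""
    def __init__(self, dim, lr=0.1, reg=0.01):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr, self.reg = lr, reg

    def partial_fit(self, X, y):              # y in {-1, +1}
        for x, t in zip(X, y):
            margin = t * (self.w @ x + self.b)
            self.w *= 1 - self.lr * self.reg  # weight decay (regularizer)
            if margin < 1:                    # only margin violators update w
                self.w += self.lr * t * x
                self.b += self.lr * t
        return self

    def predict(self, X):
        return np.sign(X @ self.w + self.b)

rng = np.random.default_rng(0)
speech = rng.normal(3, 0.5, (50, 2))     # hypothetical speech-frame features
noise = rng.normal(0, 0.5, (50, 2))      # hypothetical noise-frame features
X = np.vstack([speech, noise])
y = np.array([1] * 50 + [-1] * 50)
clf = IncrementalLinearSVM(dim=2)
for _ in range(20):                      # data arriving over several passes
    clf.partial_fit(X, y)
acc = (clf.predict(X) == y).mean()
print(acc)   # high accuracy on this well-separated toy data
```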
Goss, Kelly Christine
Quantum dots (QDs) are semiconductor nanocrystals that absorb light and re-emit it at a wavelength dependent on their size and shape. A group of quantum dots can be designed to have a unique spectral emission by varying the size of the quantum dots (wavelength) and the number of quantum dots (optical power). This technology is referred to as Multiplexed Quantum Dots (MxQD); when it was first proposed, MxQD tags were created with 6 optical power levels and one QD colour, or 3 QD colours and 2 optical power levels. It was hypothesized that a realistic limit would be a system of 6 optical power levels and 6 QD colours, resulting in 46655 unique tags. Recent work demonstrated the fabrication and detection of 9 unique tags, which is still far from the predicted capability of the technology. The limits on the number of unique tags lie in both the fabrication methods and the data detection algorithms used to read the spectral emissions. This thesis contributes toward improving the data detection algorithms for MxQD tags. To accomplish this, a communications system model is developed that includes the interference between QD colours, Inter-Symbol Interference (ISI), and additive noise. The model is developed for two optical detectors, namely a Charge-Coupled Device (CCD) spectrometer and photodiode detectors, and includes an analytical expression for the Signal-to-Noise Ratio (SNR) of the detectors. For the CCD spectrometer, the model is verified with an experimental prototype. With the models in place, communications systems tools are applied that overcome both ISI and noise, an improvement over previous work in the field that considered algorithms to overcome only ISI or noise separately. Specifically, this thesis outlines the proposal of a matched filter to improve SNR, a Minimum Mean Square Error (MMSE) equalizer that mitigates ISI in the presence of noise, and a Maximum Likelihood Sequence
Community detection has drawn a lot of attention, as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks for identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed that combines a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed around three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method finds single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels.
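The label-propagation ingredient of the local search can be sketched as follows (a simplified, deterministic variant: each node adopts the majority label among its neighbours, keeping its own label on ties; the paper's strategy additionally folds in the pseudonormal-vector fitness).

```python
from collections import Counter

def label_propagation(adj, labels, sweeps=5):
    """Refine a community partition: each node adopts the label held by a
    strict majority of its neighbours."""
    labels = dict(labels)
    for _ in range(sweeps):
        for node, neigh in adj.items():
            counts = Counter(labels[n] for n in neigh)
            best, best_count = counts.most_common(1)[0]
            if counts[labels[node]] < best_count:   # keep own label on ties
                labels[node] = best
    return labels

# Two triangles joined by a bridge edge (2-3); node 2 starts mislabeled.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
init = {0: "A", 1: "A", 2: "B", 3: "B", 4: "B", 5: "B"}
print(label_propagation(adj, init))  # node 2 rejoins community "A"
```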
Cuellar Martinez, A.; Espinosa Aranda, J.; Ramos Perez, S.; Ibarrola Alvarez, G.; Zavala Guerrero, M.; Sasmex
Rapid and reliable detection of an earthquake allows possible warnings to reach the population with more lead time. Detection algorithms in the sensing field stations (FS) of an earthquake early warning system must therefore have a high rate of correct detection; this condition permits the numerical processing needed to obtain appropriate parameters for alert activation. During more than 23 years of evolution and continuous service, the Mexican Seismic Alert System (SASMEX) has used various methodologies in the detection process to obtain the largest possible warning time when an earthquake occurs and is alerted. In addition to the characteristics of the acceleration signal observed at sensing field stations, site conditions that reduce urban noise are necessary; these may hold during the first years of operation, but urban growth near an FS later introduces urban noise, which must be tolerated while the station relocation process is carried out, so the algorithm design should incorporate robustness to reduce possible errors and false detections. This work presents results on detection algorithms used in Mexico for earthquake early warning, considering recent events and the different warning times obtained depending on detections of the P and S phases of the earthquake at the station. Some methodologies are reviewed and described in detail, along with the main features implemented in the Seismic Alert System of Mexico City (SAS), in continuous operation since 1991, and the Seismic Alert System of Oaxaca City (SASO); today both comprise SASMEX.
Wu, Ji; Zhang, Xiao-Lei
In this article, we present a new voice activity detection (VAD) algorithm that is based on statistical models and an empirical rule-based energy detection algorithm. Specifically, it takes two steps to separate speech segments from background noise. In the first step, the VAD efficiently detects possible speech endpoints using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, in the second step, we propose a new Gaussian mixture model-based multiple-observation log likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it achieves better performance than the six referenced VADs in various noise scenarios.
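Step 1 of the two-step VAD above can be sketched as a rule-based energy detector: frames whose short-time energy exceeds a multiple of an estimated noise floor become candidate speech (the multiplier and noise-floor percentile here are illustrative; step 2, the GMM log-likelihood-ratio alignment, would refine these rough endpoints).

```python
def energy_endpoints(frames, factor=3.0):
    """Mark frames whose short-time energy exceeds factor times a
    10th-percentile noise-floor estimate as candidate speech."""
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    floor = sorted(energies)[len(energies) // 10] + 1e-12
    return [e > factor * floor for e in energies]

# Synthetic signal: quiet noise, a loud "speech" burst, then noise again.
frames = [[0.01] * 160] * 5 + [[0.5] * 160] * 3 + [[0.01] * 160] * 5
print(energy_endpoints(frames))  # False x5, True x3, False x5
```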
Cordero, Jose; Garzon Reyes, Johnson
Breast cancer remains a significant public health problem; early detection of lesions can increase the success of medical treatments. Mammography is an effective imaging modality for early diagnosis of abnormalities: the image of the mammary gland is obtained with low-dose X-rays, allowing detection of a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is so far the only method shown to reduce mortality from breast cancer. In this paper, three hybrid algorithms for circumscribed mass detection on digitized mammograms are evaluated. The first stage reviews the enhancement and segmentation techniques used in processing mammographic images. Shape filtering is then applied to the resulting regions, and the surviving regions are processed by a Bayesian filter, with the feature vector for the classifier built from a few measurements. The implemented algorithms are evaluated by ROC curves on a test set of 40 images: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.
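Each point of the ROC curves used for the comparison above is a (TPR, FPR) pair at one score threshold; a minimal sketch, with hypothetical detector scores rather than the paper's data:

```python
def roc_point(scores, labels, threshold):
    """One ROC operating point: true-positive rate over lesion images vs.
    false-positive rate over normal images at the given score threshold."""
    tp = sum(s >= threshold and l for s, l in zip(scores, labels))
    fp = sum(s >= threshold and not l for s, l in zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg   # (TPR, FPR)

# Hypothetical scores; True = image contains a circumscribed mass.
scores = [0.9, 0.8, 0.75, 0.3, 0.6, 0.2, 0.1, 0.4]
labels = [True, True, True, True, False, False, False, False]
print(roc_point(scores, labels, 0.5))  # -> (0.75, 0.25)
```

Sweeping the threshold over all observed scores traces out the full curve for each algorithm.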
Cardiac disease is one of the major causes of death in the world; early diagnosis depends on detecting abnormalities in the heartbeat pattern, known as arrhythmia. A novel fuzzy neuro generalized learning vector quantization (FN-GLVQ) algorithm for automatic arrhythmia heartbeat classification is proposed. The algorithm extends the GLVQ algorithm by employing a fuzzy logic concept as the discriminant function, in order to develop a robust algorithm and improve classification performance. The algorithm is tested against the MIT-BIH arrhythmia database to measure its performance. Based on the experimental results, FN-GLVQ increases the accuracy of GLVQ by a soft margin. As we intend to build a device with automated arrhythmia detection, FN-GLVQ is then implemented in a Field Programmable Gate Array to prototype the system as a real device.
Lee, Michael A; Corbisiero, Raffaele; Nabert, David R; Coman, James A; Giudici, Michael C; Tomassoni, Gery F; Turk, Kyong T; Breiter, David J; Zhang, Yunlong
Supraventricular tachycardia (SVT) has many characteristics that are similar to ventricular tachycardia (VT). This presents a significant challenge for the SVT-detection algorithms of an implantable cardioverter defibrillator (ICD). A newly developed ICD, which utilizes a Vector Timing and Correlation algorithm as well as interval-based conventional SVT discrimination algorithms (Rhythm ID), was evaluated in this study. This study was a prospective, multicenter trial that evaluated 96 patients implanted with an ICD at 21 U.S. centers. All patients were followed at 2 weeks, 1 month, and every 3 months post implant. A manual Rhythm ID reference vector was acquired prior to any arrhythmia induction. During testing, atrial tachyarrhythmias were induced first, followed by ventricular arrhythmia induction. Induced and spontaneous SVT and VT/ventricular fibrillation (VF) episodes recorded during the trial were annotated by physician investigators. The mean age of the patients implanted with an ICD was 67.3 +/- 10.8 years. Eighty-one percent of patients were male. The primary cardiovascular disease was coronary artery disease, and the primary tachyarrhythmia was monomorphic VT. Implementation of the Rhythm ID algorithm did not affect the VT/VF detection time. There were a total of 370 ventricular tachyarrhythmias (277 induced and 93 spontaneous) and 441 SVT episodes (168 induced and 273 spontaneous). Sensitivity for ventricular tachyarrhythmias was 100%, and specificity for SVT was 92% (94% and 91% for induced and spontaneous SVT, respectively). All patients had a successful manual Rhythm ID acquisition prior to atrial tachyarrhythmia induction. At the 1-month follow-up, the Rhythm ID references were updated automatically an average of 167.8 +/- 122.7 times. Stored Rhythm ID references correlated to patients' normally conducted rhythm 100% at 2 weeks, and 98% at 1 month. The Rhythm ID algorithm achieved 100% sensitivity for VT/VF, and 92% specificity for SVT. The manual
Stochastic resonance is a phenomenon, studied and mainly exploited in telecommunications, which permits the amplification and detection of weak signals with the assistance of noise. The first papers on this technique date to the early 1980s and were developed to explain the periodically recurrent ice ages. Other applications mainly concern neuroscience, biology, medicine and, of course, signal analysis and processing. Recently, some researchers have applied the technique to detecting faults in mechanical systems and bearings. In this paper, we try to better understand the conditions of applicability and which algorithm is best suited to these purposes. In fact, for the methodology to be profitable and efficient in enhancing the signal spikes due to faults in the rings and balls/rollers of bearings, some parameters have to be properly selected. This is a problem, since in system identification this procedure should be as blind as possible. Two algorithms are analysed: the first exploits classical SR with three mutually dependent parameters, while the other uses the Woods-Saxon potential, again with three parameters but holding a different meaning. The comparison of the performance of the two algorithms and the optimal choice of their parameters are the scope of this paper. The algorithms are tested on simulated and experimental data, showing an evident capacity to increase the signal-to-noise ratio.
(Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection stabilizes the AUV with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using Gabor wavelet filters, and feature classification using machine learning based on neural networks. Because of the extensive runtime of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, with the result that the decision tree algorithm was selected. Our coral detector runs in 70 ms, compared to the 22 s executed by the algorithm of Purser et al. (2009).
Wang, Chunhui; Qu, Yang; Tang, Yajun; Pang, Tiantian
In order to obtain better heterodyne detection results, we used a phase IQ quadrature demodulation algorithm to process the data acquired by laser heterodyne detection. Based on a laser heterodyne interferometer, we processed the interferometer data with the phase IQ quadrature demodulation algorithm, examined the effects of the system parameters (signal-to-noise ratio, sampling rate, filter order and cutoff frequency) on phase precision, and chose through experiment the parameters giving the best phase precision: a signal-to-noise ratio of 25 dB; IF signal frequencies of 98.3 MHz, 98.5 MHz, 99.1 MHz, 99.5 MHz and 100 MHz; a sampling rate of 512-2048; and a filter cutoff frequency and order of 0.11 and 40, respectively.
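The IQ demodulation idea can be sketched as follows: mix the IF signal with quadrature local oscillators, average as a crude low-pass, and recover the phase from atan2(Q, I). The sample rate and frequencies below are illustrative (unitless) stand-ins, not the paper's MHz settings, and a real implementation would use a proper low-pass filter of the cited order and cutoff.

```python
import math

def iq_phase(signal, f_if, fs):
    """Recover signal phase by IQ quadrature demodulation: multiply by
    cos/-sin references at the IF frequency and average over the record."""
    n = len(signal)
    I = sum(s * math.cos(2 * math.pi * f_if * k / fs)
            for k, s in enumerate(signal)) / n
    Q = sum(s * -math.sin(2 * math.pi * f_if * k / fs)
            for k, s in enumerate(signal)) / n
    return math.atan2(Q, I)

fs, f_if, true_phase = 1024.0, 98.3, 0.7   # sample rate, IF freq, phase (rad)
sig = [math.cos(2 * math.pi * f_if * k / fs + true_phase) for k in range(1024)]
print(iq_phase(sig, f_if, fs))   # close to the injected 0.7 rad
```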
Bio-Inspired Distributed Decision Algorithms for Anomaly Detection. Rutgers University, March 2017, Final Technical Report; approved for public release. The surviving fragments of this entry concern attacks which purposefully and maliciously masquerade as 'normal network behavior', the classification decisions social insects routinely make in the natural world, reducing collateral damage to a minimum, and the minimal and marginal deployment gain of networked services across administrative domains.
Hoffman, J.; Schmidt, C. C.; Brunner, J. C.; Prins, E. M.
The Wild Fire Automated Biomass Burning Algorithm (WF_ABBA), developed at the Cooperative Institute for Meteorological Satellite Studies (CIMSS), has a long legacy of operational wildfire detection and characterization. In recent years, applications of geostationary fire detection and characterization data have been expanding. Fires are detected with a contextual algorithm and when the fires meet certain conditions the instantaneous fire size, temperature, and radiative power are calculated and provided in user products. The WF_ABBA has been applied to data from Geostationary Operational Environmental Satellite (GOES)-8 through 15, Meteosat-8/-9, and Multifunction Transport Satellite (MTSAT)-1R/-2. WF_ABBA is also being developed for the upcoming platforms like GOES-R Advanced Baseline Imager (ABI) and other geostationary satellites. Development of the WF_ABBA for GOES-R ABI has focused on adapting the legacy algorithm to the new satellite system, enhancing its capabilities to take advantage of the improvements available from ABI, and addressing user needs. By its nature as a subpixel feature, observation of fire is extraordinarily sensitive to the characteristics of the sensor and this has been a fundamental part of the GOES-R WF_ABBA development work.
The aim of this paper is to demonstrate the feasibility of a live Automated Incident Detection (AID) system using only Floating Car Data (FCD), in one of the first large-scale FCD AID field trials. AID systems detect traffic events and alert upcoming drivers to improve traffic safety without human monitoring. These automated systems traditionally rely on traffic monitoring sensors embedded in the road. FCD allows for finer spatial granularity of traffic monitoring; however, low penetration rates of FCD probe vehicles and data latency have historically hindered FCD AID deployment. We use a live country-wide FCD system monitoring an estimated 5.93% of all vehicles. An FCD AID system is presented and compared to the installed AID system (using loop sensor data) on 2 different highways in the Netherlands. Our results show the FCD AID can adequately monitor changing traffic conditions and follow the AID benchmark. The presented FCD AID is integrated with the road operator's systems as part of an innovation project, making this, to the best of our knowledge, the first full-chain technical feasibility trial of an FCD-only AID system. Additionally, FCD allows for AID on roads without installed sensors, enabling road safety improvements at low cost.
Novak, Avrey; Nyflot, Matthew J.; Ermoian, Ralph P.; Jordan, Loucille E.; Sponseller, Patricia A.; Kane, Gabrielle M.; Ford, Eric C.; Zeng, Jing
Purpose: Radiation treatment planning involves a complex workflow that has multiple potential points of vulnerability. This study utilizes an incident reporting system to identify the origination and detection points of near-miss errors, in order to guide departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected, nor have they applied a near-miss risk index (NMRI) to gauge severity. Methods: From 3/2012 to 3/2014, 1897 incidents were analyzed from a departmental incident learning system. All incidents were prospectively reviewed weekly by a multidisciplinary team and assigned an NMRI score ranging from 0 to 4, reflecting potential harm to the patient (no potential harm to potential critical harm). Incidents were classified by point of incident origination and detection based on a 103-step workflow. The individual steps were divided among nine broad workflow categories (patient assessment, imaging for radiation therapy (RT) planning, treatment planning, pretreatment plan review, treatment delivery, on-treatment quality management, post-treatment completion, equipment/software quality management, and other). The average NMRI scores of incidents originating or detected within each broad workflow area were calculated. Additionally, of the 103 individual process steps, 35 were classified as safety barriers, the process steps whose primary function is to catch errors. The safety barriers which most frequently detected incidents were identified and analyzed. Finally, the distance between event origination and detection was explored by grouping events by the number of broad workflow areas passed through before detection, and average NMRI scores were compared. Results: Near-miss incidents most commonly originated within treatment planning (33%). However, the incidents with the highest average NMRI scores originated during imaging for RT planning (NMRI = 2.0, average NMRI of all events = 1.5), specifically
Automated forward vehicle detection is an integral component of many advanced driver-assistance systems. Methods based on fusing multiple sources of visual information have, with their distinctive advantages, become an important topic in this research field. Two key points must be resolved in the detection process: finding robust features for identification, and applying an efficient algorithm to train the model designed with multiple sources of information. This paper presents an adaptive SVM (Support Vector Machine) model to detect vehicles, with range estimation, using an on-board camera. Because of extrinsic factors such as shadows and illumination, we pay particular attention to enhancing the system with several robust features extracted from real driving environments. Then, with the introduction of an improved genetic algorithm, the features are fused efficiently by the proposed SVM model. To apply the model in a forward collision warning system, longitudinal distance information is provided simultaneously. The proposed method is implemented on a test car, and evaluation experiments show reliability in terms of both detection rate and potential effectiveness in a real driving environment.
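The genetic-algorithm side of the feature-fusion step can be sketched as evolving a bitmask over candidate features; everything here is illustrative (toy feature names, and a hand-made fitness standing in for cross-validated SVM accuracy), not the paper's improved GA.

```python
import random

def ga_select(features, fitness, pop=20, gens=30, seed=1):
    """Toy GA: evolve a 0/1 mask over features, keeping the half of the
    population the fitness function scores highest each generation."""
    rng = random.Random(seed)
    n = len(features)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

features = ["edge", "shadow", "symmetry", "texture", "noise1", "noise2"]
useful = {0, 1, 2, 3}   # pretend the first four features are informative
fit = lambda mask: sum(m * (1 if i in useful else -1) for i, m in enumerate(mask))
best = ga_select(features, fit)
print([f for f, m in zip(features, best) if m])
```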
Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin
The Internet enables information to be accessible anytime and anywhere. This scenario creates an environment whereby information can be easily copied. Easy access to the internet is one of the factors which contribute towards piracy in Malaysia as well as the rest of the world. The 2013 BSA Global Software Survey ("Compliance Gap") on software piracy found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of the unlicensed installations worldwide was reported to be 62.7 billion. Piracy can happen anywhere, including universities. Malaysia, like other countries in the world, faces issues of piracy committed by students in universities. Piracy in universities concerns acts of stealing intellectual property. It can take the form of software piracy, music piracy, movie piracy, and piracy of intellectual materials such as books, articles and journals. This scenario affects the owners of intellectual property, as their property is in jeopardy. This study developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm called the Ant Colony Optimization algorithm. The training data were collected in a study conducted at Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy rate is better than that of the J48 algorithm.
Full Text Available Pathogens feed on fruits and vegetables, causing great food losses or at least reduction of their shelf life. These pathogens can cause losses of the final product or on the farms where the products are grown, attacking leaves, stems and trees. This review analyses disease detection sensors and algorithms for both the farm and postharvest management of fruit and vegetable quality. Mango, avocado, apple, tomato, potato, citrus and grapes were selected as the fruits and vegetables for study due to their world-wide consumption. Disease warning systems for predicting pathogens and insects on farms during fruit and vegetable production are commonly used for all these crops and are available where meteorological stations are present. It can be seen that these disease risk systems are being slowly replaced by remote sensing monitoring in developed countries. Satellite images have reduced their temporal resolution, but are expensive and must become cheaper for world-wide use. In the last 30 years, a great deal of research has been carried out on non-destructive sensors for food quality. Currently, non-destructive technology is applied for sorting the high-quality fruit desired by the consumer. The sensors require algorithms to work properly, the most widely used being discriminant analysis and trained neural networks. New algorithms will be required due to the high quantity of data acquired and its processing, and for disease warning strategies for disease detection.
Willersrud, Anders; Imsland, Lars; Blanke, Mogens
Downhole incidents such as kick, lost circulation, pack-off, and hole cleaning issues are important contributors to downtime in drilling. In managed pressure drilling (MPD), operating margins are typically narrower, implying more frequent incidents and more severe consequences. Detection...
Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.
Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.
Full Text Available In modern radar systems, the clutter's statistical characteristics are unknown. In such clutter, the CFAR capability of parametric detection algorithms declines, so nonparametric detection algorithms become very important. An intelligent nonparametric Generalized Sign (GS) detection algorithm, Variability Index-Generalized Sign (VI-GS), based on adaptive threshold selection is proposed. The VI-GS detection algorithm employs a composite approach based on the GS detection algorithm, the Trimmed GS detection algorithm (TGS), and the Greatest Of GS detection algorithm (GO-GS). The performance of this detection algorithm in a nonhomogeneous clutter background is analyzed using both simulated Gaussian-distributed clutter and real radar data. The results show that it performs robustly in the homogeneous background as well as the nonhomogeneous background.
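The rank-based GS family named above lends itself to a compact sketch. This is an illustration of the statistic only; the window length, rank threshold, and trim depth below are illustrative choices, not values from the paper:

```python
def gs_detect(cut, reference, k_threshold):
    """Generalized Sign (GS) test: count how many reference cells the
    cell under test (CUT) exceeds; declare detection when this rank
    statistic reaches a preset threshold. Being rank-based, the false
    alarm rate does not depend on the (unknown) clutter distribution."""
    rank = sum(1 for r in reference if cut > r)
    return rank >= k_threshold

def trimmed_gs_detect(cut, reference, k_threshold, trim=2):
    """Trimmed GS (TGS) variant: discard the largest reference samples
    before ranking, to resist interfering targets in the window."""
    kept = sorted(reference)[:len(reference) - trim]
    rank = sum(1 for r in kept if cut > r)
    return rank >= k_threshold
```

A VI-GS style detector would additionally compute a variability index over the reference window and switch between the plain, trimmed, and greatest-of variants accordingly.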
Tel, G.; Mattern, F.
It is shown that the termination detection problem for distributed computations can be modelled as an instance of the garbage collection problem. Consequently, algorithms for the termination detection problem are obtained by applying transformations to garbage collection algorithms. The
El Tokhy, M.E.S.M.E.S.
The main functions of a spectroscopy system are signal detection, filtering and amplification, pileup detection and recovery, dead time correction, amplitude analysis, and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems, with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been in continuous development and improvement. For this reason we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. Therefore, the main objective of this thesis is concentrated on (1) studying the behavior of quantum dot (QD) devices under gamma radiation, and (2) developing efficient algorithms for handling problems of gamma-ray spectroscopy. For gamma radiation detection, a detailed study of nanotechnology QD sources and infrared photodetectors (QDIP) for gamma radiation detection is introduced. There are two different types of quantum scintillator detectors, which dominate the area of ionizing radiation measurements: QD scintillator detectors and QDIP scintillator detectors. By comparison with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors increase the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. Consequently, the potential of nanocomposite materials based on semiconductor quantum dots for radiation detection via scintillation has been demonstrated in the literature. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate
Full Text Available In the urban object detection challenge organized by ISPRS WG III/4, high geometric and radiometric resolution aerial images of Vaihingen/Stuttgart, Germany, are distributed. The acquired data set contains optical false color, near-infrared images and airborne laser scanning data. The presented research focused exclusively on the optical images, so the elevation information was ignored. The road detection procedure is built up of two main phases: a segmentation done by neural networks and a compilation made by genetic algorithms. The applied neural networks were support vector machines with a radial basis kernel function, and self-organizing maps with hexagonal network topology and a Euclidean distance function for neighborhood management. The neural techniques were compared with a hyperbox classifier, known from statistical image classification practice. The compilation of the segmentation is realized by a novel application of the common genetic algorithm and by the differential evolution technique. The genes were implemented to detect the road elements by evaluating a special binary fitness function. The results have proven that the evolutionary technique can automatically find major road segments.
Full Text Available We propose a new algorithm for automatically detecting violations of traffic rules, to improve people's safety at unregulated pedestrian crossings. The algorithm uses a multi-step procedure: zebra crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.
Full Text Available We present a search method based around the grouping of data residuals, suitable for the detection of many quasi-periodic signals. Combined with an efficient and easily implemented method to predict the maximum transit timing variations of a transiting circumbinary exoplanet, we form a fast search algorithm for such planets. We here target the Kepler dataset in particular, where all the transiting examples of circumbinary planets have been found to date. The method is presented and demonstrated on two known systems in the Kepler data.
Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad
Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm, which has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos of the BESS trials were scored off-line by three experienced observers to obtain reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the average human BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were observed. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0
Jose de Jesus Guerrero-Turrubiates
Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
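The core step described, computing the constants of the generic parabola equation y = ax² + bx + c from three boundary pixels, can be sketched directly. This is a minimal illustration of that step only (via Lagrange interpolation coefficients), not of the full EDA search or the Hadamard-product fitness:

```python
def parabola_through(p1, p2, p3):
    """Return (a, b, c) of y = a*x**2 + b*x + c passing through three
    points with distinct x, using Lagrange interpolation coefficients."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    a = y1 / d1 + y2 / d2 + y3 / d3
    b = -(y1 * (x2 + x3) / d1 + y2 * (x1 + x3) / d2 + y3 * (x1 + x2) / d3)
    c = y1 * x2 * x3 / d1 + y2 * x1 * x3 / d2 + y3 * x1 * x2 / d3
    return a, b, c
```

An EDA would repeatedly sample boundary-pixel triples, fit each candidate parabola this way, and score it against the image.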
Britt, Charles L.; Bracalente, Emedio M.
The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of wind shear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.
Tokars, Jerome I; Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A
BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14-28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data.
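A minimal sketch of the kind of modification described above: an adaptive control chart whose baseline standard deviation is floored. The baseline length, SD floor, and alert threshold mirror values mentioned in the text, but the code is an illustration, not the BioSense implementation:

```python
from statistics import mean, stdev

def c2_alert(history, today, baseline=28, min_sd=1.0, threshold=3.0):
    """Alert when today's syndrome count exceeds the baseline mean by
    `threshold` standard deviations, with the SD floored at `min_sd` so
    that very quiet series do not produce hair-trigger alerts.

    history : list of daily counts preceding today (most recent last)
    """
    base = history[-baseline:]
    mu = mean(base)
    sd = max(stdev(base), min_sd)   # minimum-SD modification
    return today > mu + threshold * sd
```

The other modifications in the abstract (day-of-week stratification, a total-visit denominator) would change how `base` is selected and how counts are normalized before this comparison.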
Salamatova, T.; Zhukov, V.
The paper presents the application of the artificial immune systems apparatus as a heuristic method of network intrusion detection, for the algorithmic provision of intrusion detection systems. A coevolutionary immune algorithm with clonal selection was elaborated. Testing on different datasets yielded empirical results for evaluating the algorithm's effectiveness. To identify its degree of efficiency, the algorithm was compared with analogs. The fundamental rules underlying the solutions generated by this algorithm are described in the article.
Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun
Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project were used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: The Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary
Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
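Sample-by-sample comparison between an algorithm's event labeling and a human rater's is commonly summarized with a chance-corrected agreement score such as Cohen's kappa. A minimal sketch of that metric (the paper's exact scoring procedure may differ):

```python
def cohens_kappa(a, b):
    """Chance-corrected sample-by-sample agreement between two
    per-sample event labelings of equal length (e.g. 'fix', 'sac')."""
    assert len(a) == len(b) and a
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n)        # chance agreement
             for l in labels)
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1; agreement at chance level gives kappa = 0.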
Yang, Yikun; Sun, Lin; Zhu, Jinshan; Wei, Jing; Su, Qinghua; Sun, Wenxiao; Liu, Fangwei; Shu, Meiyan
Due to the complex characteristics of dust and the sparsity of ground-based monitoring stations, dust monitoring faces severe challenges, especially in dust storm-prone areas. Aiming to construct a high-precision dust storm detection model, a pixel database, consisting of dust over a variety of typical surface types such as cloud, vegetation, Gobi and ice/snow, was constructed, and the distributions of reflectance and Brightness Temperature (BT) were analysed. Based on these, a new Simplified Dust Detection Algorithm (SDDA) for the Suomi National Polar-Orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP VIIRS) is proposed. NPP VIIRS images covering the northern China and Mongolian regions, which feature serious dust storms, were selected for the dust detection experiments. The monitoring results were compared with true colour composite images, and the results showed that most dust areas can be accurately detected, except for fragmented thin dust over bright surfaces. Dust ground-based measurements obtained from the Meteorological Information Comprehensive Analysis and Process System (MICAPS) and the Ozone Monitoring Instrument Aerosol Index (OMI AI) products were selected for comparison purposes. Results showed that the dust monitoring results agreed well in spatial distribution with the OMI AI dust products and the MICAPS ground-measured data, with an average accuracy of 83.10%. The SDDA is relatively robust and can realize automatic monitoring of dust storms.
Dheeba, J.; Selvi, Tamil
Microcalcification clusters in mammograms are a significant early sign of breast cancer. Individual clusters are difficult to detect, and hence an automatic computer-aided mechanism will help the radiologist detect microcalcification clusters in an easy and efficient way. This paper presents a new classification approach for detection of microcalcifications in digital mammograms using a particle swarm optimization (PSO) based clustering technique. The fuzzy C-means clustering technique, well suited for clustering data sets, is used in combination with PSO. We adopt particle swarm optimization to search for the cluster centers in an arbitrary data set automatically. PSO can search for the best solution drawing on both the social-only model and the cognition-only model. This method is quite simple and valid, and it can avoid local minima. The proposed classification approach is applied to a database of 322 dense mammographic images originating from the MIAS database. Results show that the proposed PSO-FCM approach gives better detection performance than conventional approaches.
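A compact one-dimensional illustration of the PSO-FCM idea: each particle encodes a candidate set of cluster centers, and the swarm minimizes the fuzzy C-means objective. All constants (inertia, acceleration coefficients, swarm size) are illustrative, and real mammogram features are multi-dimensional:

```python
import random

def fcm_objective(data, centers, m=2.0):
    """Fuzzy C-means objective J = sum_ij u_ij^m * d_ij^2, with the
    memberships u_ij computed in closed form from the distances."""
    J = 0.0
    for x in data:
        d2 = [max((x - c) ** 2, 1e-12) for c in centers]
        inv = [1.0 / d for d in d2]
        s = sum(inv)
        for dij, invd in zip(d2, inv):
            u = invd / s                     # membership for m = 2
            J += (u ** m) * dij
    return J

def pso_fcm(data, k=2, n_particles=20, iters=60, seed=1):
    """Particles are candidate center sets; PSO minimizes the FCM objective."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    pos = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(n_particles)]
    vel = [[0.0] * k for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_J = [fcm_objective(data, p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_J[i])
    gbest, gbest_J = pbest[g][:], pbest_J[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            J = fcm_objective(data, pos[i])
            if J < pbest_J[i]:
                pbest[i], pbest_J[i] = pos[i][:], J
                if J < gbest_J:
                    gbest, gbest_J = pos[i][:], J
    return sorted(gbest)
```

On two well-separated 1-D clusters the swarm's best center set settles near the cluster means.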
Full Text Available The problem of automatic detection of earthquake signals in seismograms and definition of the first arrivals of p and s waves is considered. The algorithm is based on the analysis of the t(A) function, which represents the time of the first appearance of a number of consecutive swings of amplitude greater than A in the seismic signal. It allows the exploitation of such common features of earthquake seismograms as a sudden first p-arrival of amplitude greater than the general amplitude of the noise and, after a definite interval of time, an s-arrival whose amplitude overcomes that of the p-arrival. The method was applied to 3-channel records of Friuli aftershocks; s-arrivals were defined correctly in all cases, and p-arrivals were defined in most cases using strict criteria of detection. No false signals were detected. All p-arrivals were defined using soft criteria of detection, but with less reliability, and two false events
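The t(A)-based picking idea can be sketched as a run-length test: an arrival is declared at the first appearance of several consecutive samples whose amplitude exceeds A. The run length and threshold below are illustrative, not the paper's calibrated criteria:

```python
def first_arrival(signal, A, n_swings=3):
    """Return the index of the first appearance of `n_swings`
    consecutive samples exceeding amplitude A, or None. Sudden onsets
    stand out because background noise rarely sustains such a run."""
    run = 0
    for i, v in enumerate(signal):
        if abs(v) > A:
            run += 1
            if run == n_swings:
                return i - n_swings + 1     # start of the qualifying run
        else:
            run = 0
    return None
```

A strict criterion corresponds to a larger `n_swings` (fewer false picks, possibly missed p-arrivals); a soft criterion to a smaller one.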
Silva Peixoto de Aboim Chaves, H.M. da; Pauwelussen, J.J.A.; Mulder, M.; Paassen, M.M. van; Happee, R.; Mulder, M.
Drivers usually maintain an error-neglecting control strategy (passive phase) in keeping their vehicle on the road, only to change to an error-correcting approach (active phase) when the vehicle state becomes inadequate. We developed an algorithm that is capable of detecting whether the driver is
Zhang Yongshun; Jia Xin; Song Ge
The existing direct sequence spread spectrum (DSSS) communications interference detection algorithms are confined to high sampling rates. To solve this problem, an algorithm for DSSS communications interference detection was designed based on compressive sensing (CS). First, the orthogonal matching pursuit (OMP) algorithm was applied to interference detection in DSSS communications, and the advantages and weaknesses of the algorithm were analyzed. Secondly, according to the we...
Bobo, William V; Pathak, Jyotishman; Kremers, Hilal Maradit; Yawn, Barbara P; Brue, Scott M; Stoppel, Cynthia J; Croarkin, Paul E; St Sauver, Jennifer; Frye, Mark A; Rocca, Walter A
We validated an algorithm designed to identify new or prevalent users of antidepressant medications via population-based drug prescription records. We obtained population-based drug prescription records for the entire Olmsted County, Minnesota, population from 2011 to 2012 (N=149,629) using the existing electronic medical records linkage infrastructure of the Rochester Epidemiology Project (REP). We electronically selected a random sample of 200 new antidepressant users stratified by age and sex. The algorithm required the exclusion of antidepressant use in the 6 months preceding the date of the first qualifying antidepressant prescription (index date). Medical records were manually reviewed and adjudicated to calculate the positive predictive value (PPV). We also manually reviewed the records of a random sample of 200 antihistamine users who did not meet the case definition of new antidepressant user to estimate the negative predictive value (NPV). 161 of the 198 subjects electronically identified as new antidepressant users were confirmed by manual record review (PPV 81.3%). Restricting the definition of new users to subjects who were prescribed typical starting doses of each agent for treating major depression in non-geriatric adults resulted in an increase in the PPV (90.9%). Extending the time windows with no antidepressant use preceding the index date resulted in only modest increases in PPV. The manual abstraction of medical records of 200 antihistamine users yielded an NPV of 98.5%. Our study confirms that REP prescription records can be used to identify prevalent and incident users of antidepressants in the Olmsted County, Minnesota, population.
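The core of such an algorithm, an index date with a preceding washout window, is straightforward to sketch. The data layout and the 183-day window (about 6 months) are illustrative, not the REP schema:

```python
from datetime import date, timedelta

def find_new_users(rx_dates, study_start, washout_days=183):
    """rx_dates: {patient_id: sorted list of prescription dates}.
    A patient is a 'new user' if their first prescription on or after
    study_start (the index date) has no prescription of the same drug
    class in the preceding washout window."""
    new_users = {}
    for pid, dates in rx_dates.items():
        index = next((d for d in dates if d >= study_start), None)
        if index is None:
            continue                       # no use in the study period
        window_start = index - timedelta(days=washout_days)
        if not any(window_start <= d < index for d in dates):
            new_users[pid] = index
    return new_users
```

Extending `washout_days` corresponds to the "extending the time windows" sensitivity analysis described in the abstract.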
Hu, Fang; Wang, Mingzhu; Wang, Yanran; Hong, Zhehao; Zhu, Yanhui
Currently, community detection in complex networks has become a topic of intense interest. In this paper, based on the Spectral Clustering (SC) algorithm, we introduce the idea of Jacobi iteration and propose a novel algorithm, J-SC, for community detection in complex networks. The accuracy and efficiency of this algorithm are tested on some representative real-world networks and several computer-generated networks. The experimental results indicate that the J-SC algorithm can accurately and effectively detect the community structure in these networks. Meanwhile, compared with the state-of-the-art community detection algorithms SC, SOM, K-means, Walktrap and Fastgreedy, the J-SC algorithm has better performance, achieving higher values of modularity and NMI. Moreover, the new algorithm runs faster than the SOM and Walktrap algorithms.
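The Jacobi iteration borrowed here is the classical eigenvalue method for symmetric matrices: repeatedly apply a plane rotation that annihilates the largest off-diagonal entry. A minimal sketch of that building block (spectral clustering would apply it to the graph Laplacian and then cluster the leading eigenvectors; this sketch returns eigenvalues only):

```python
import math

def jacobi_eigenvalues(A, tol=1e-10, max_rotations=100):
    """Eigenvalues of a small symmetric matrix via Jacobi rotations."""
    n = len(A)
    a = [row[:] for row in A]
    for _ in range(max_rotations):
        # locate the largest off-diagonal element
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    p, q, big = i, j, abs(a[i][j])
        if big < tol:
            break
        # rotation angle that zeroes a[p][q]
        theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                     # rows
            apk, aqk = a[p][k], a[q][k]
            a[p][k] = c * apk - s * aqk
            a[q][k] = s * apk + c * aqk
        for k in range(n):                     # columns
            akp, akq = a[k][p], a[k][q]
            a[k][p] = c * akp - s * akq
            a[k][q] = s * akp + c * akq
    return sorted(a[i][i] for i in range(n))
```

Each rotation is a similarity transform, so the eigenvalues are preserved while the matrix is driven toward diagonal form.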
Clark, G A; Robbins, C L; Wade, K A; Souza, P R
This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals are reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model
Celaya-Padilla, Jose M; Galván-Tejada, Carlos E; López-Monteagudo, F E; Alonso-González, O; Moreno-Báez, Arturo; Martínez-Torteya, Antonio; Galván-Tejada, Jorge I; Arceo-Olague, Jose G; Luna-García, Huizilopoztli; Gamboa-Rosales, Hamurabi
Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, but the road surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when not identified by the drivers. Furthermore, human-induced abnormalities, such as speed bumps, could also cause accidents. In addition, while said obstacles ought to be signalized according to specific road regulation, they are not always correctly labeled. Therefore, we developed a novel method for the detection of road abnormalities (i.e., speed bumps). This method makes use of a gyro, an accelerometer, and a GPS sensor mounted in a car. After having the vehicle cruise through several streets, data is retrieved from the sensors. Then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018, and an area under the receiver operating characteristic curve of 0.9784. This methodology has the potential to detect speed bumps in quasi real-time conditions, and can be used to construct a real-time surface monitoring system.
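The final detection stage described above, a logistic model over inertial features, can be sketched compactly. The features (peak deviation from gravity, sample variance) and all coefficients below are illustrative stand-ins, not the GA-selected model from the paper:

```python
import math

def bump_probability(az, weights=(4.0, 2.5), bias=-6.0, g=9.81):
    """Score a short window of vertical accelerometer samples (m/s^2)
    for the presence of a speed bump via a logistic model. The weights
    and bias are hypothetical; in the paper a genetic algorithm selects
    the model under cross-validation."""
    dev = [a - g for a in az]                      # deviation from gravity
    peak = max(abs(d) for d in dev)                # feature 1: peak jolt
    var = sum(d * d for d in dev) / len(dev)       # feature 2: variance
    z = weights[0] * peak + weights[1] * var + bias
    return 1.0 / (1.0 + math.exp(-z))              # logistic link
```

A smooth road yields a probability near zero; a window containing a sharp vertical jolt pushes the score toward one.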
Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous, binary, or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain and the novel algorithms, and present results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
Ganziy, Denis; Jespersen, O.; Rose, B.
We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability and accur...
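The abstract names only the ingredients of the DGA, so the following is a hypothetical sketch: a threshold determines the detection window, and the centroid of the thresholded samples gives the peak position; subtracting the threshold from the weights plays the role of a simple bias compensation.

```python
def centroid_peak(signal, threshold):
    """Locate a peak: find the window where the signal exceeds a threshold,
    then compute the centre of gravity of the windowed samples. Subtracting
    the threshold (a crude bias compensation) stops the baseline inside the
    window from dragging the centroid toward the window centre."""
    idx = [i for i, v in enumerate(signal) if v > threshold]
    if not idx:
        return None                      # no peak above threshold
    window = range(idx[0], idx[-1] + 1)
    weights = [max(signal[i] - threshold, 0.0) for i in window]
    total = sum(weights)
    return sum(i * w for i, w in zip(window, weights)) / total

# Symmetric triangular peak centred at index 5
sig = [0, 0, 1, 2, 3, 4, 3, 2, 1, 0, 0]
print(centroid_peak(sig, 0.5))  # → 5.0
```

For a symmetric peak the centroid reproduces the apex exactly; for asymmetric peaks it gives a sub-sample estimate of the peak position.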
Zhang, Shou-ping; Xin, Xiao-kang
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established, using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors no larger than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves the established source identification model can be used to direct emergency responses.
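The kind of objective function described can be sketched with a standard analytic solution of the 1-D unsteady advection-dispersion equation for an instantaneous point release; the paper's exact formula and parameter values are not given here, so all numbers below are illustrative.

```python
import math

def concentration(x, t, mass, x0, u=0.5, D=2.0, k=1e-5, area=100.0):
    """Analytic solution of the 1-D unsteady advection-dispersion equation
    with first-order decay for an instantaneous release of `mass` at x0
    (u: velocity, D: dispersion coefficient, k: decay rate, area: river
    cross-section). All parameter values are illustrative assumptions."""
    spread = math.sqrt(4.0 * math.pi * D * t)
    return (mass / (area * spread)
            * math.exp(-(x - x0 - u * t) ** 2 / (4.0 * D * t))
            * math.exp(-k * t))

def objective(candidate, observations):
    """Sum of squared errors between modelled and observed concentrations;
    the genetic algorithm would minimise this over (mass, x0)."""
    mass, x0 = candidate
    return sum((concentration(x, t, mass, x0) - c) ** 2
               for x, t, c in observations)

# Observations generated from a "true" source (mass=5000, x0=200): the true
# candidate scores zero error, a wrong one scores worse.
obs = [(x, t, concentration(x, t, 5000.0, 200.0))
       for x in (300.0, 400.0) for t in (100.0, 200.0)]
print(objective((5000.0, 200.0), obs), objective((4000.0, 250.0), obs))
```

A GA population would evolve candidate (mass, x0) pairs toward the minimum of this objective.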
Inamdar, Munish V.; Lastoskie, Christian M.; Fierke, Carol A.; Sastry, Ann Marie
We present a mobile trap algorithm to sense zinc ions using protein-based sensors such as carbonic anhydrase (CA). Zinc is an essential biometal required for mammalian cellular functions, although its intracellular concentration is reported to be very low. Protein-based sensors like CA molecules are employed to sense rare species like zinc ions. In this study, the zinc ions are mobile targets, which are sought by the mobile traps in the form of sensors. Particle motions are modeled using random walks along with the first-passage technique for efficient simulations. The association reaction between sensors and ions is incorporated using a probability (p1) upon an ion-sensor collision. The dissociation reaction of an ion-bound CA molecule is modeled using a second, independent probability (p2). The results of the algorithm are verified against traditional simulation techniques (e.g., Gillespie's algorithm). This study demonstrates that individual sensor molecules can be characterized using the probability pair (p1, p2), which, in turn, is linked to the system-level chemical kinetic constants, kon and koff. Further investigations of the CA-Zn reaction using the mobile trap algorithm show that when the diffusivity of zinc ions approaches that of the sensor molecules, the reaction data obtained under the static trap assumption differ from the reaction data obtained using the mobile trap formulation. This study also reveals similar behavior when the sensor molecule has a higher dissociation constant. In both cases, the reaction data obtained using the static trap formulation reach equilibrium at a higher number of complex molecules (ion-bound sensor molecules) compared to the reaction data from the mobile trap formulation. With practical limitations on the number of sensors that can be inserted/expressed in a cell and the stochastic nature of intracellular ionic concentrations, fluorescence from the number of complex sensor molecules at equilibrium will be the measure of the
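A toy one-dimensional version of the mobile-trap idea, with binding probability p1 on collision and dissociation probability p2 per step, can be sketched as follows. The paper models particle motion with random walks and first-passage techniques in a cellular setting; everything here (ring geometry, parameter values) is a drastic illustrative simplification.

```python
import random

def simulate(n_ions=30, n_sensors=10, size=50, steps=2000, p1=0.5, p2=0.05,
             ions_move=True, seed=7):
    """Toy 1-D mobile-trap model on a ring: ions and sensor molecules
    random-walk; an ion landing on a free sensor binds with probability p1,
    and a bound pair dissociates with probability p2 per step. Setting
    ions_move=False mimics the static-trap assumption the abstract contrasts
    with the mobile-trap formulation. Returns the number of bound complexes."""
    rng = random.Random(seed)
    ions = [rng.randrange(size) for _ in range(n_ions)]
    sensors = [rng.randrange(size) for _ in range(n_sensors)]
    bound = [None] * n_sensors          # index of the ion bound to each sensor
    for _ in range(steps):
        # diffusion step
        sensors = [(s + rng.choice((-1, 1))) % size for s in sensors]
        free_ion = [i for i in range(n_ions) if i not in bound]
        if ions_move:
            for i in free_ion:
                ions[i] = (ions[i] + rng.choice((-1, 1))) % size
        # association / dissociation step
        for j in range(n_sensors):
            if bound[j] is None:
                hits = [i for i in free_ion if ions[i] == sensors[j]]
                if hits and rng.random() < p1:
                    bound[j] = hits[0]
                    free_ion.remove(hits[0])
            elif rng.random() < p2:
                ions[bound[j]] = sensors[j]   # ion released where the sensor is
                bound[j] = None
    return sum(b is not None for b in bound)

print(simulate())
```

Comparing `simulate(ions_move=True)` against `simulate(ions_move=False)` is the 1-D analogue of the mobile-versus-static trap comparison discussed in the abstract.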
Yazan M. Alomari
Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time-consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic RCD algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, determine the number of iterations in a fully dynamic way, and enhance detection accuracy and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included precision, recall, and F-measure tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.
Orman, Günce Keziban; Labatut, Vincent; Cherifi, Hocine
Community detection is one of the most active fields in complex network analysis, due to its potential value in practical applications. Many works inspired by different paradigms are devoted to the development of algorithmic solutions allowing the network structure in such cohesive subgroups to be revealed. Comparative studies reported in the literature usually rely on a performance measure considering the community structure as a partition (Rand index, normalized mutual information, etc.). However, this type of comparison neglects the topological properties of the communities. In this paper, we present a comprehensive comparative study of a representative set of community detection methods, in which we adopt both types of evaluation. Community-oriented topological measures are used to qualify the communities and evaluate their deviation from the reference structure. In order to mimic real-world systems, we use artificially generated realistic networks. It turns out there is no equivalence between the two approaches: a high performance does not necessarily correspond to correct topological properties, and vice versa. They can therefore be considered complementary, and we recommend applying both of them in order to perform a complete and accurate assessment.
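The partition-based measures mentioned above reduce to counting pairwise agreements between two partitions; for example, a minimal Rand index:

```python
from itertools import combinations

def rand_index(part1, part2):
    """Rand index between two partitions given as {node: community} dicts:
    the fraction of node pairs on which the partitions agree (the pair is
    either together in both partitions or apart in both)."""
    nodes = list(part1)
    agree = sum((part1[a] == part1[b]) == (part2[a] == part2[b])
                for a, b in combinations(nodes, 2))
    return agree / (len(nodes) * (len(nodes) - 1) / 2)

ref = {1: 'A', 2: 'A', 3: 'B', 4: 'B'}
print(rand_index(ref, ref))                              # → 1.0
print(rand_index(ref, {1: 'x', 2: 'x', 3: 'x', 4: 'y'})) # → 0.5
```

As the paper argues, a detected partition can score well on such a pairwise measure while its communities still have the wrong topological properties (density, degree distribution, and so on).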
Myocardial Infarction (MI) is one of the most frequent diseases, and can cause death, disability, and monetary loss in patients who suffer from cardiovascular disorders. Diagnostic methods for this ailment are typically invasive, yet they do not achieve the required detection accuracy. Recent feature extraction methods, for example Auto Regressive (AR) modelling, Magnitude Squared Coherence (MSC), and Wavelet Coherence (WTC) using the Physionet database, yield huge feature sets. Many of these features may be inconsequential, containing redundant and non-discriminative components that add computational burden and degrade performance. Hybrid Firefly and Particle Swarm Optimization (FFPSO) is therefore used to optimise the raw ECG signal directly, instead of extracting features with the above techniques. The results in this paper show that, for the detection of the MI class, the FFPSO algorithm with an ANN gives 99.3% accuracy, sensitivity of 99.97%, and specificity of 98.7% on the MIT-BIH database, with the NSR database also included. The proposed approach shows that methods based on feature optimization of ECG signals are well suited to diagnosing the condition of heart patients.
Wang Haidong; Li Yuanjing; Yang Yigang; Li Tiezhu; Chen Boxian; Cheng Jianping
This paper describes several auto-analysis algorithms used in an explosive detection system based on the TNA method. These include auto-calibration in the presence of disturbances, MCA auto-calibration against a calibrated spectrum, and auto-fitting and integration of the hydrogen and nitrogen element data. With these numerical algorithms, the gamma spectra can be analyzed automatically and precisely, ultimately achieving automatic explosive detection.
The existing direct sequence spread spectrum (DSSS) communications interference detection algorithms are confined to high sampling rates. In order to solve this problem, an algorithm for DSSS communications interference detection was designed based on compressive sensing (CS). First, the orthogonal matching pursuit (OMP) algorithm was applied to interference detection in DSSS communications, and the advantages and weaknesses of the algorithm were analyzed. Second, to address the weaknesses of the OMP algorithm, a joint interference detection method based on the OMP algorithm and the cell-averaging constant false alarm rate (CA-CFAR) was proposed. Theoretical analysis and computer simulation both proved the effectiveness of the new algorithm. The simulation results show that the new method not only achieves interference detection but also estimates the interference quantity effectively.
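The CA-CFAR side of the proposed joint detector compares each cell against a local noise estimate taken from neighbouring training cells; a simplified 1-D sketch (window sizes and scale factor are illustrative):

```python
def ca_cfar(x, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: for each cell, estimate the noise level from the
    training cells on both sides (skipping the guard cells next to the cell
    under test) and declare a detection when the cell exceeds scale x that
    average. Simplified 1-D sketch with illustrative parameters."""
    detections = []
    half = guard + train
    for i in range(half, len(x) - half):
        left = x[i - half: i - guard]
        right = x[i + guard + 1: i + half + 1]
        noise = (sum(left) + sum(right)) / (2 * train)
        if x[i] > scale * noise:
            detections.append(i)
    return detections

# Flat noise floor of 1.0 with a strong interferer at index 25
signal = [1.0] * 50
signal[25] = 20.0
print(ca_cfar(signal))  # → [25]
```

Because the threshold adapts to the local average, the false alarm rate stays roughly constant as the noise floor varies, which is what makes CFAR a natural complement to the OMP stage.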
Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.
The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit.
Piermarocchi, Stefano; Bini, Silvia; Martini, Ferdinando; Berton, Marianna; Lavini, Anna; Gusson, Elena; Marchini, Giorgio; Padovani, Ezio Maria; Macor, Sara; Pignatto, Silvia; Lanzetta, Paolo; Cattarossi, Luigi; Baraldi, Eugenio; Lago, Paola
To evaluate the sensitivity, specificity, and safest cut-offs of three predictive algorithms (WINROP, ROPScore and CHOP ROP) for retinopathy of prematurity (ROP). A retrospective study was conducted in three centres from 2012 to 2014; 445 preterm infants with gestational age (GA) ≤ 30 weeks and/or birthweight (BW) ≤ 1500 g, plus additional unstable cases, were included. No-ROP, mild, and type 1 ROP were categorized. The algorithms were analysed for infants with all parameters (GA, BW, weight gain, oxygen therapy, blood transfusion) needed for calculation (399 babies). ROP was identified in both eyes in 116 patients (26.1%), and 44 (9.9%) had type 1 ROP. GA and BW were significantly lower in the ROP group than in no-ROP subjects (GA: 26.7 ± 2.2 and 30.2 ± 1.9 weeks, respectively, p < 0.0001; BW: 839.8 ± 287.0 and 1288.1 ± 321.5 g, respectively, p = 0.0016). Customized alarms of ROPScore and CHOP ROP correctly identified all infants having any ROP or type 1 ROP. WINROP missed 19 cases of ROP, including three of type 1 ROP. ROPScore and CHOP ROP provided the best performance, with an area under the receiver operating characteristic curve for the detection of severe ROP of 0.93 (95% CI, 0.90-0.96, and 95% CI, 0.89-0.96, respectively); WINROP obtained 0.83 (95% CI, 0.77-0.87). Median time from alarm to treatment was 11.1, 5.1, and 9.1 weeks for WINROP, ROPScore, and CHOP ROP, respectively. ROPScore and CHOP ROP showed 100% sensitivity in identifying sight-threatening ROP. Predictive algorithms are a reliable tool for early identification of infants requiring referral to an ophthalmologist, for reorganizing resources, and for reducing stressful procedures for preterm babies.
El Mountassir, M.; Yaacoubi, S.; Dahmene, F.
Novelty detection is a widely used algorithm in different fields of study due to its capabilities to recognize any kind of abnormalities in a specific process in order to ensure better working in normal conditions. In the context of Structural Health Monitoring (SHM), this method is utilized as damage detection technique because the presence of defects can be considered as abnormal to the structure. Nevertheless, the performance of such a method could be jeopardized if the structure is operating in harsh environmental and operational conditions (EOCs). In this paper, novelty detection statistical technique is used to investigate the detection of damages under various EOCs. Experiments were conducted with different scenarios: damage sizes and shapes. EOCs effects were simulated by adding stochastic noise to the collected experimental data. Different levels of noise were studied to determine the accuracy and the performance of the proposed method.
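A minimal sketch of the novelty detection idea described above: learn the statistics of a damage-free feature, set a threshold, and flag observations beyond it. The scalar feature, the Gaussian noise model, and the 3-sigma rule here are illustrative assumptions, not the paper's method.

```python
import math
import random

def novelty_threshold(baseline, n_sigma=3.0):
    """Train on damage-free data: return the mean, standard deviation, and an
    n-sigma novelty threshold for a scalar damage-sensitive feature
    (e.g. a guided-wave residual)."""
    m = sum(baseline) / len(baseline)
    s = math.sqrt(sum((v - m) ** 2 for v in baseline) / len(baseline))
    return m, s, m + n_sigma * s

rng = random.Random(3)
healthy = [rng.gauss(0.0, 1.0) for _ in range(500)]   # pristine-state features
m, s, thr = novelty_threshold(healthy)

# EOC effects can be simulated, as in the abstract, by adding stochastic noise
# to the data before thresholding; here we just test two fixed observations.
normal_feature = 0.5     # consistent with the training distribution
damaged_feature = 8.0    # far outside it
print(normal_feature > thr, damaged_feature > thr)  # → False True
```

Raising the simulated noise level widens the healthy distribution and degrades detectability of small damage, which is the trade-off the paper studies.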
ARL-TR-8271, JAN 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure, by Kwok F Tom, Sensors and Electron Devices Directorate.
ARL-TR-8270, JAN 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period: 1 October 2016–30 September 2017.
This is a description of an automatic whistler detection algorithm based on a nonlinear transformation of the VLF signal spectrogram. In the transformed spectrogram, the whistler trace appears as a straight line, the detection of which is an algorithmically simple task. Testing of the program implementation of the algorithm showed that detection can be performed in real time.
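The abstract does not spell out the nonlinear transformation. A plausible sketch, assuming the standard whistler dispersion law t(f) = t0 + D/sqrt(f), is to re-plot arrival time against 1/sqrt(f), after which the curved trace becomes a straight line recoverable by a simple fit:

```python
# Standard whistler dispersion law: arrival time t(f) = t0 + D / sqrt(f),
# with dispersion constant D. In the coordinate u = 1/sqrt(f) the trace is
# the straight line t = t0 + D*u, which is easy to detect (line fit, Hough
# transform, etc.). Values below are synthetic.
freqs = [1000.0 + 500.0 * i for i in range(10)]   # Hz
t0, D = 0.2, 40.0                                 # s, s*sqrt(Hz)
times = [t0 + D / f ** 0.5 for f in freqs]

u = [1.0 / f ** 0.5 for f in freqs]
# Least-squares line fit t = a + b*u recovers (t0, D) exactly for clean data.
n = len(u)
mu, mt = sum(u) / n, sum(times) / n
b = sum((ui - mu) * (ti - mt) for ui, ti in zip(u, times)) / \
    sum((ui - mu) ** 2 for ui in u)
a = mt - b * mu
print(round(a, 6), round(b, 6))  # → 0.2 40.0
```

On a real spectrogram the same linearization would be applied to the time-frequency ridge before line detection; the exact transform used by the paper may differ.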
Gui, Chun; Zhang, Ruisheng; Zhao, Zhili; Wei, Jiaxuan; Hu, Rongjing
In order to deal with stochasticity in center node selection and instability in community detection of the label propagation algorithm, this paper proposes an improved label propagation algorithm, named the label propagation algorithm based on community belonging degree (LPA-CBD), that employs community belonging degree to determine the number and centers of communities. The general process of LPA-CBD is that the initial community is identified by the nodes with the maximum degree, and then it is optimized or expanded by community belonging degree. After obtaining the rough structure of the network community, the remaining nodes are labeled using label propagation. The experimental results on 10 real-world networks and three synthetic networks show that LPA-CBD achieves a reasonable community number, better accuracy, and higher modularity compared with four other prominent algorithms. Moreover, the proposed algorithm not only has lower algorithmic complexity and higher community detection quality, but also improves the stability of the original label propagation algorithm.
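For reference, the baseline label propagation step that LPA-CBD builds on can be sketched as follows; the CBD-based center selection itself is not reproduced here.

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_iter=100):
    """Basic label propagation: every node starts in its own community and
    repeatedly adopts the most frequent label among its neighbours (ties
    broken at random) until labels stop changing. The random start and tie
    breaking are exactly the sources of instability that LPA-CBD addresses
    by pre-selecting community centers."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        order = list(adj)
        rng.shuffle(order)               # asynchronous update in random order
        for v in order:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by a single bridge edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
print(len(set(labels.values())))
```

Running this with different seeds can yield different community counts on the same graph, which illustrates the instability that motivates the paper's deterministic center selection.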
National Aeronautics and Space Administration — This paper considers the problem of change detection using local distributed eigen monitoring algorithms for next generation of astronomy petascale data pipelines...
can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high dimensionality, and massive data size. This dissertation, a step in that direction, translates earth science challenges into computer science problems and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) change detection algorithms that can handle the large numbers of missing and noisy values present in satellite data sets; and (3) spatio-temporal analysis techniques to identify the scale and scope of disturbance events.
Willersrud, Anders; Imsland, Lars; Blanke, Mogens
Downhole incidents such as kick, lost circulation, pack-off, and hole cleaning issues are important contributors to downtime in drilling. In managed pressure drilling (MPD), operating margins are typically narrower, implying more frequent incidents and more severe consequences. Detection and han...
Community structure is one of the most fundamental and important topological characteristics of complex networks. Research on community structure has wide applications and is very important for analyzing the topological structure, understanding the functions, finding the hidden properties, and forecasting the temporal evolution of networks. This paper analyzes some related algorithms and proposes a new algorithm, the CN agglomerative algorithm, based on graph theory and the local connectedness of networks, to find communities in a network. We show this algorithm is distributed and runs in polynomial time, and simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm for dynamic complex networks, and simulations verify that the modified CN algorithm also achieves high accuracy.
Wang, Xingyuan; Qin, Xiaomeng
In this paper, an algorithm to choose a good partition in bipartite networks is proposed. Bipartite networks have considerable theoretical significance and broad application prospects. In view of the distinctive structure of bipartite networks, our method defines two parameters to capture the relationships between same-type nodes and between heterogeneous nodes, respectively. Moreover, our algorithm employs a new method of finding and expanding the core communities in bipartite networks. The two kinds of nodes are handled separately and merged, and the sub-communities are thereby obtained. After that, the final communities are found according to the merging rule. The proposed algorithm has been evaluated on real-world and artificial networks, and the results verify the accuracy and reliability of the intimacy parameters. Finally, comparisons with similar algorithms show that the proposed algorithm has better performance.
Noort, M. van; Klunder, G.
Current vehicle detection on motorways is generally based on either inductive loop systems or various alternatives such as video cameras. Recently, we encountered two new developments that take a different approach: one from The Netherlands using microwave sensors, and the other from Sweden using
Wei, Yanbo; Lu, Zhizhong; Yuan, Gannan; Fang, Zhao; Huang, Yu
In this paper, the application of the emerging compressed sensing (CS) theory and the geometric characteristics of targets in radar images are investigated. Currently, signal detection algorithms based on CS theory require prior knowledge of the sparsity of the target signals. However, in practice, it is often impossible to know the sparsity in advance. To solve this problem, a novel sparsity adaptive matching pursuit (SAMP) detection algorithm is proposed. This algorithm executes the detection task by updating the support set and gradually increasing the sparsity to approximate the original signal. To verify the effectiveness of the proposed algorithm, data collected in 2010 at Pingtan, which is located on the coast of the East China Sea, were used. The experimental results illustrate that the proposed method adaptively completes the detection task without knowing the signal sparsity, and its detection performance is close to that of the matching pursuit (MP) and orthogonal matching pursuit (OMP) detection algorithms.
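The greedy pursuit family the abstract refers to can be illustrated with a residual-based stopping rule, which removes the need to know the sparsity K in advance; this sketch omits SAMP's stage-wise support updating and backtracking.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(atoms, y, tol=1e-9):
    """Greedy matching pursuit with a residual-energy stopping rule. Plain
    MP/OMP needs the target sparsity K up front; stopping on the residual
    instead lets the support grow until the signal is explained, which is
    the idea behind sparsity-adaptive variants such as SAMP (whose actual
    stage-wise backtracking is not reproduced here). `atoms` are unit-norm
    dictionary columns; returns {atom index: coefficient}."""
    residual = list(y)
    coeffs = {}
    while dot(residual, residual) > tol and len(coeffs) < len(atoms):
        # pick the atom most correlated with the current residual
        k = max(range(len(atoms)), key=lambda j: abs(dot(atoms[j], residual)))
        c = dot(atoms[k], residual)
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs

# Orthonormal toy dictionary (standard basis of R^4); y is 2-sparse.
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
y = [0.0, 3.0, 0.0, -2.0]
print(matching_pursuit(atoms, y))  # → {1: 3.0, 3: -2.0}
```

With an orthonormal dictionary the recovery is exact in K steps; for overcomplete dictionaries the orthogonal projection step of OMP/SAMP improves on this plain MP update.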
Rankin, Arturo; Huertas, Andres; Matthies, Larry
Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.
Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional, and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real time. This seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a high-performance computing centre, which permits years of continuous seismic data to be explored rapidly. We present here the preliminary results of the application of this processing chain for years
Testen, A.M.; Backman, Paul A.
This poster describes the research undertaken to determine the level of contamination of imported quinoa with quinoa downy mildew, caused by Peronospora variabilis, as well as to develop a rapid method of detection using DNA primers. The majority of lots, coming from a wide variety of sources, were found to be contaminated with the pathogen, indicating it is more widespread than anticipated. Additionally, DNA primers for P. variabilis were shown to be effective in identifying most cases of cont...
Child, C. H. T.; Stathis, K.
Apriori Stochastic Dependency Detection (ASDD) is an algorithm for fast induction of stochastic logic rules from a database of observations made by an agent situated in an environment. ASDD is based on features of the Apriori algorithm for mining association rules in large databases of sales transactions and the MSDD algorithm for discovering stochastic dependencies in multiple streams of data. Once these rules have been acquired, the Precedence algorithm assigns operator precedence w...
We present a modified delay-coordinate mapping-based QRS complex detection algorithm, suitable for hardware implementation. In the original algorithm, the phase-space portrait of an electrocardiogram signal is reconstructed in a two-dimensional plane using the method of delays. Geometrical properties of the obtained phase-space portrait are exploited for QRS complex detection. In our solution, a bandpass filter is used for ECG signal prefiltering, and an improved method for detection threshold-level calculation is utilized. We developed the algorithm on the MIT-BIH Arrhythmia Database (sensitivity of 99.82% and positive predictivity of 99.82%) and tested it on the Long-Term ST Database (sensitivity of 99.72% and positive predictivity of 99.37%). Our algorithm outperforms several well-known QRS complex detection algorithms, including the original algorithm.
In this paper, by analyzing the characteristics of infrared moving targets, a symmetric frame differencing target detection algorithm based on local clustering segmentation is proposed. Building on the high real-time performance and accuracy of traditional symmetric differencing, this novel algorithm uses local grayscale clustering to accomplish target detection after carrying out symmetric frame differencing to locate the regions of change. In addition, the mean shift tracking algorithm is improved to solve the problem of missed targets caused by erroneous convergence. As a result, a kernel-based mean shift target tracking algorithm based on detection updates is also proposed. This tracking algorithm makes use of the interaction between detection and tracking to correct tracking errors in real time and to realize robust target tracking in complex scenes. The validity, robustness, and stability of the proposed algorithms are all verified by experiments on mid-infrared aerial sequences with vehicles as targets.
Claassen, J.P.; Patterson, M.M.
Some intrusion detection systems are susceptible to nonstationary noise, resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential in removing correlated noise components prior to detection. For such noise-susceptible systems, the suitability of a noise reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in light of the nature of intruder and noise signals.
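Two-channel adaptive noise cancellation of the kind described above is classically realized with an LMS filter; a minimal sketch on synthetic correlated noise (filter length, step size, and the noise model are illustrative, not from the study):

```python
import random

def lms_cancel(primary, reference, taps=4, mu=0.05):
    """Two-channel adaptive noise cancellation with the LMS algorithm: an FIR
    filter adapts to predict the correlated noise in `primary` from the
    `reference` channel; the prediction error is the cleaned signal passed
    on to the detector."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # noise estimate
        e = primary[n] - y                         # cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out

# Demo with correlated noise only (no intruder signal): the primary channel
# is a scaled copy of the reference noise, so after the filter converges the
# residual power should be far below the input noise power.
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(4000)]
primary = [0.8 * noise[n] for n in range(4000)]
cleaned = lms_cancel(primary, noise)
print(sum(e * e for e in cleaned[-500:]) / 500)
```

In a deployed system an intruder signal uncorrelated with the reference channel would pass through in `cleaned` while the correlated noise is removed.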
Hirakawa, Satoshi; Nishio, Yoshifumi; Ushida, Akio; Ueno, Junji; Kasem, I.; Nishitani, Hiromu; Rekeczky, C.; Roska, T.
In this article, a new type of diffusion template and an analogic CNN algorithm using this diffusion template for detecting some lung cancer symptoms in X-ray films are proposed. The performance of the diffusion template is investigated, and our CNN algorithm is verified to successfully detect some key lung cancer symptoms.
Ajit Thachil, MD, DM, CCDS
Tachycardia detection and therapy algorithms in implantable cardioverter-defibrillators (ICDs) reduce, but do not eliminate, inappropriate ICD shocks. Awareness of the pros and cons of a particular algorithm helps to predict its utility in specific situations. We report a case where PR Logic™, an algorithm commonly used in currently implanted ICDs to differentiate supraventricular tachycardia (SVT) from ventricular tachycardia, resulted in inappropriate detection and shock for an SVT, and we discuss several solutions to the problem.
Liu, Yecai; Posey, Drew L.; Cetron, Martin S.; Painter, John A.
Background Before 2007, U.S.-bound immigrants and refugees were screened for tuberculosis (TB) by a smear-based algorithm that could not diagnose smear-negative and culture-positive TB. In 2007, the Centers for Disease Control and Prevention began to implement a culture-based algorithm. Objective To evaluate the effect of the culture-based algorithm on preventing the importation of TB to the United States by immigrants and refugees from foreign countries. Design Population-based, cross-sectional study. Setting Panel physician sites for overseas medical examination. Patients Immigrants and refugees with TB. Measurements Comparison of the increase of smear-negative and culture-positive TB cases diagnosed overseas among immigrants and refugees by the culture-based algorithm with the decline of reported TB cases among foreign-born persons within 1 year after arrival in the United States from 2007 to 2012. Results Of the 3 212 421 arrivals of immigrants and refugees from 2007 to 2012, 1 650 961 (51.4%) were screened by the smear-based algorithm and 1 561 460 (48.6%) were screened by the culture-based algorithm. Among the 4032 TB cases diagnosed by the culture-based algorithm, 2195 (54.4%) were smear-negative and culture-positive. Before implementation (2002 to 2006), the annual number of reported TB cases among foreign-born persons within 1 year after arrival was relatively constant (range, 1424 to 1626 cases; mean, 1504 cases) but decreased from 1511 to 940 cases during implementation (2007 to 2012). During the same period, the annual number of smear-negative and culture-positive TB cases diagnosed overseas among U.S.-bound immigrants and refugees by the culture-based algorithm increased from 4 in 2007 to 629 in 2012. Limitation This analysis did not control for the decline in new arrivals of nonimmigrant visitors to the United States and the decrease of incidence of TB in their countries of origin. Conclusion Implementation of the culture-based algorithm in U
Liu, Pengyu; Jia, Kebin
A low-complexity saliency detection algorithm for perceptual video coding is proposed, in which low-level encoding information is adopted as the characteristics for visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the analysis of visual perception characteristics compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms in fast video coding.
Liu, Xingmiao; Wang, Shicheng; Zhao, Jing
Methods for detecting small moving targets in infrared image sequences that contain moving nuisance objects and background noise are analyzed in this paper. A novel infrared small target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for infrared small target detection: one larger than the small target and one equal to it in size. First, the algorithm uses the large template to calculate the USAN of each pixel in the image and to detect the small target, the image edges and isolated noise pixels. Then it uses the smaller template to recalculate the USAN of the pixels detected in the first step, improving on the principles of the SUSAN algorithm by exploiting the characteristics of the small target, so that the algorithm detects only small targets and is insensitive to image edge pixels and isolated noise pixels. The interference from image edges and isolated noise points is thus removed and the candidate target points can be identified. Finally, the target is detected by exploiting the continuity and consistency of target movement. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.
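As a hedged illustration of the USAN measure at the core of SUSAN-style detection (a sketch with our own toy image and similarity threshold, not the authors' double-template implementation): for each pixel (the "nucleus"), count the template pixels whose brightness is similar to it; under a template larger than the target, a small target yields a very small USAN.

```python
def usan(image, x, y, radius, t=25):
    """Count similar-brightness pixels in a square template of given radius."""
    h, w = len(image), len(image[0])
    nucleus = image[y][x]
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                if abs(image[yy][xx] - nucleus) <= t:
                    count += 1
    return count

# Toy 7x7 image: dark background with a single bright one-pixel target.
img = [[10] * 7 for _ in range(7)]
img[3][3] = 200

# Under the big template (radius 2) the target pixel has a tiny USAN
# (only itself), while a background pixel has a large one -- the basis
# for selecting candidate target points.
print(usan(img, 3, 3, 2))  # USAN of the target pixel
print(usan(img, 1, 1, 2))  # USAN of a background pixel
```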
We exhibit an algorithm to determine the bridge number of a hyperbolic knot in the 3-sphere. The proof uses adaptations of almost normal surface theory for compact surfaces with boundary in ideally triangulated knot exteriors.
...their results on a given set of input images. The evaluation will be performed by comparing the calibration data, the fundamental matrix and the rotation and translation errors extracted from each algorithm with ground truth data....
Chierici, F.; Embriaco, D.; Morucci, S.
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is also designed for use in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm's performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data, having adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions were used in order to consider realistic working scenarios in the test. We also present an application of the algorithm to the tsunami generated by the Tohoku earthquake on March 11th, 2011, using data recorded by several tide gauges scattered across the Pacific area.
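The tide-removal-plus-detection idea can be sketched minimally as follows (this is an assumption-laden illustration, not the TDA implementation: the tide is removed with a simple causal moving average rather than a real-time filter, and the window length, threshold and synthetic record are ours):

```python
import math

def moving_average(x, win):
    """Causal moving average; early samples use the available history."""
    out = []
    s = 0.0
    for i, v in enumerate(x):
        s += v
        if i >= win:
            s -= x[i - win]
        out.append(s / min(i + 1, win))
    return out

def detect(series, win=200, threshold=0.05):
    """Return indices where the de-tided residual exceeds the threshold."""
    tide = moving_average(series, win)
    return [i for i, (v, t) in enumerate(zip(series, tide))
            if abs(v - t) > threshold]

# Synthetic record: a very slow "tide" plus a short transient at sample 500.
n = 1000
tide = [0.5 * math.sin(2 * math.pi * i / 20000) for i in range(n)]
signal = [tide[i] + (0.2 if 500 <= i < 520 else 0.0) for i in range(n)]

alarms = detect(signal)
print(alarms[0] if alarms else None)  # first alarm lands at the transient onset
```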
Thanos, Konstantinos-Georgios; Skroumpelou, Katerina; Rizogiannis, Konstantinos; Kyriazanos, Dimitris M.; Astyakopoulos, Alkiviadis; Thomopoulos, Stelios C. A.
In this paper a solution is presented that aims to assist the early detection and localization of a fire incident by exploiting crowdsourcing and unofficial civilian online reports. It consists of two components: (a) the potential fire incident detection component and (b) the visualization component. The first component comprises two modules that run in parallel to collect reports posted on public platforms and infer potential fire incident locations. The first module collects the public reports, distinguishes those that refer to a potential fire incident and stores the corresponding information in a structured way. The second module aggregates the stored reports and infers a probable fire location based on the number of reports per area and the time and location of these reports. The result is then fed into a fusion module which combines it with information collected by sensors, if available, in order to provide a more accurate fire event detection capability. The visualization component is a fully operational public information channel which provides accurate and up-to-date information about active and past fires and raises awareness of forest fires and the relevant hazards among citizens. The channel can efficiently visualize information regarding detected fire incidents, fire expansion areas, and related information such as the detecting sensors and reporting origin. The paper concludes with insight into current end-user CONOPS with regard to the inclusion of the proposed solution in current fire detection CONOPS.
Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.
A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
Chen, Xiao-Yun; Zhan, Yan-Yan
In this paper, we propose two anomaly detection algorithms for time series, PAV and MPAV. The first basic idea of this paper is that the anomalous pattern is the most infrequent time series pattern, i.e., the pattern with the lowest support. The second basic idea is that PAV detects anomalies directly in the original time series, while MPAV extracts anomalies from the wavelet approximation coefficients of the time series. Regarding complexity, since the wavelet transform compresses data, filters noise and maintains the basic form of the time series, the MPAV algorithm improves efficiency while maintaining the accuracy of the algorithm. As PAV and MPAV are simple and easy to implement without training, the proposed multi-scale anomaly detection algorithm based on infrequent patterns of time series can prove very useful for computer science applications.
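The "lowest-support pattern" idea can be sketched as follows (a hedged illustration, not the authors' PAV code: the discretization scheme, window length and toy series are our assumptions):

```python
from collections import Counter

def discretize(series, levels=3):
    """Map values to integer symbols by equal-width binning."""
    lo, hi = min(series), max(series)
    step = (hi - lo) / levels or 1.0
    return [min(int((v - lo) / step), levels - 1) for v in series]

def least_frequent_pattern(series, w=3):
    """Return the lowest-support length-w pattern, its support and position."""
    symbols = discretize(series)
    patterns = [tuple(symbols[i:i + w]) for i in range(len(symbols) - w + 1)]
    support = Counter(patterns)
    pattern, count = min(support.items(), key=lambda kv: kv[1])
    return pattern, count, patterns.index(pattern)

# Mostly periodic series with one injected spike at index 10.
data = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 9, 1, 0, 1, 0, 1, 0, 1]
pattern, count, pos = least_frequent_pattern(data)
print(pattern, count, pos)  # the lowest-support pattern covers the spike
```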
Christin, C.; Smilde, A.K.; Hoefsloot, H.C.J.; Suits, F.; Bischoff, R.; Horvatovich, P.L.
Correlation optimized warping (COW) based on the total ion current (TIC) is a widely used time alignment algorithm (COW-TIC). This approach works successfully on chromatograms containing few compounds and having a well-defined TIC. In this paper, we have combined COW with a component detection
Spectrum sensing is the first and fundamental function of the cognitive cycle and plays a vital role in the success of CRs (Cognitive Radios). Spectrum sensing indicates the presence or absence of PUs (Primary Users) in RF (Radio Frequency) spectrum occupancy measurements. In order to correctly determine the presence or absence of Primary Users, the algorithms in practice involve complex mathematics, which increases their computational complexity and keeps CRs from operating as "green" communication systems. In this paper, an energy-efficient and computationally less complex Spectrum Sensing algorithm based on energy detection is proposed. The design goals of the proposed algorithm are to save both processing and sensing energy. First, by using fewer MAC (Multiply and Accumulate) operations, it saves the processing energy needed to determine the presence or absence of PUs. Second, it saves sensing energy by providing a way to find the lowest possible sensing time at which the spectrum is to be sensed. Two scenarios have been defined for testing the proposed algorithm, i.e., simulating the detection of Primary Users under ideal and noisy conditions. Detection of PUs in both scenarios has been compared to obtain the probability of detection. The energy efficiency of the proposed algorithm has been demonstrated by a performance comparison between the proposed (less complex) algorithm and the legacy energy detection algorithm. With its reduced complexity, the proposed spectrum sensing algorithm can be considered under the paradigm of green Cognitive Radio communication.
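The legacy energy-detection decision rule the paper compares against can be sketched in a few lines (threshold, window length and the constant "primary user" offset below are illustrative assumptions, not the paper's reduced-MAC design):

```python
import random

def energy_detect(samples, threshold):
    """Declare a PU present if the average sample energy exceeds the threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

random.seed(1)
noise_only = [random.gauss(0, 1) for _ in range(4096)]   # H0: noise only
with_pu = [n + 2.0 for n in noise_only]                  # H1: PU signal + noise

print(energy_detect(noise_only, threshold=2.0))  # expected False
print(energy_detect(with_pu, threshold=2.0))     # expected True
```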
With the increasing use of mobile GPS (global positioning system) devices, a large volume of user trajectory data can be produced. In most existing work, trajectories are divided into a set of stops and moves. Stops represent the most important and meaningful parts of a trajectory, and many data mining methods exist to extract these locations. DBSCAN (density-based spatial clustering of applications with noise) is a classical density-based algorithm used to find high-density areas in space, and different derivatives of this algorithm have been proposed to find the stops in trajectories. However, most of these methods require a manually set threshold, such as a speed threshold, for each feature variable. In our research, we first define a new concept of move ability. Second, by introducing the theory of data fields and taking move ability into consideration, we construct a new, comprehensive, hybrid feature-based density measurement method which considers both temporal and spatial properties. Finally, an improved DBSCAN algorithm is proposed using this new density measurement method. In the experimental section, the effectiveness and efficiency of our method are validated against real datasets. When comparing our algorithm with classical density-based clustering algorithms, our experimental results show the efficiency of the proposed method.
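For reference, the baseline the paper improves on can be sketched as plain DBSCAN over 2-D points (the paper's hybrid density measure with "move ability" and data fields is not reproduced here; `eps`, `min_pts` and the toy track are our assumptions):

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = 0

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # noise (may become a border point)
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # claim former noise as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:     # core point: keep expanding
                seeds.extend(j_nbrs)
        cluster += 1
    return labels

# Two dense "stops" plus one isolated point that should be labeled noise.
stop_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
stop_b = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
track = stop_a + stop_b + [(2.5, 2.5)]
print(dbscan(track, eps=0.3, min_pts=3))
```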
Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. This process is essential for many image processing algorithms such as image retrieval, image segmentation, image recognition, and adaptive image compression. We propose a salient region detection algorithm for full-resolution images. This algorithm analyzes the randomness and correlation of image pixels and a pixel-to-region saliency computation mechanism. The algorithm first obtains points with high saliency probability by using an improved smallest univalue segment assimilating nucleus (SUSAN) operator. It then reconstructs the entire saliency region detection by taking these points as reference and combining them with the image's spatial color distribution, as well as regional and global contrasts. The results for subjective and objective image saliency detection show that the proposed algorithm exhibits outstanding performance in terms of indices such as precision and recall rates.
ARL-TR-8269 ● JAN 2018 ● US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision
Tiedeu, A.; Kom, G.; Kom, M.
In this paper, we implement and compare two methods of computer-aided detection of masses on mammograms. The two algorithms each consist of three steps: segmentation, binarization and noise suppression, using different techniques for each step. A database of 60 images was used to compare the performance of the two algorithms in terms of general detection efficiency and conservation of the size and shape of detected masses. (author)
Hellassa Wassim; Boukari Karima
This paper presents a new algorithm for spatially multiplexed MIMO detection that exploits the resources available in the search procedure to enhance the error performance of the K-best search approach. The proposed algorithm exploits the heuristics of the nodes and is referred to as the Heuristic K-best algorithm (HK-best). Unlike the conventional K-best algorithm, the HK-best algorithm sorts the visited nodes in each layer based on both the path metric and the heuristic. Simulation results sho...
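The K-best breadth search underlying this family of detectors can be sketched as follows (a hedged illustration: the zero-default `heuristic` argument only indicates where the paper's heuristic term would enter the sorting rule, and the toy per-layer costs are our assumptions):

```python
import heapq

def k_best(layer_costs, K, heuristic=lambda path: 0.0):
    """layer_costs: one dict {symbol: cost} per detection layer."""
    paths = [((), 0.0)]
    for costs in layer_costs:
        # Expand every surviving path by every candidate symbol.
        expanded = [(path + (sym,), metric + c)
                    for path, metric in paths
                    for sym, c in costs.items()]
        # Keep the K candidates with the smallest metric + heuristic.
        paths = heapq.nsmallest(K, expanded,
                                key=lambda pm: pm[1] + heuristic(pm[0]))
    return min(paths, key=lambda pm: pm[1])

# Toy 3-layer detection problem over a binary alphabet.
layers = [{0: 0.1, 1: 0.9}, {0: 0.8, 1: 0.2}, {0: 0.3, 1: 0.6}]
best_path, best_metric = k_best(layers, K=2)
print(best_path, round(best_metric, 2))
```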
Rezvan Almas Shehni
Wide-spread deployment of Wireless Sensor Networks (WSNs) necessitates special attention to security issues, amongst which Sybil attacks are the most important. At the core of a Sybil attack, malicious nodes try to disrupt network operations by creating several fabricated IDs. Due to energy consumption concerns in WSNs, devising detection algorithms which relieve the sensor nodes of high computational and communication loads is of great importance. In this paper, a new computationally lightweight watchdog-based algorithm is proposed for detecting Sybil IDs in mobile WSNs. The proposed algorithm employs watchdog nodes for collecting detection information and a designated watchdog node for processing the detection information and generating the final Sybil list. Benefiting from a newly devised co-presence state diagram and adequate detection rules, the new algorithm features low extra communication overhead, as well as a satisfactory compromise between two otherwise contradictory detection performance measures, True Detection Rate (TDR) and False Detection Rate (FDR). Extensive simulation results illustrate the merits of the new algorithm compared to a couple of recent watchdog-based Sybil detection algorithms.
Kaup, H J; Hexamer, M; Werner, J
To prevent sudden cardiac death in patients who are at risk from long-standing tachyarrhythmia, the implantable cardioverter defibrillator (ICD) is the first-choice therapy. ICDs use a range of electrostimuli up to defibrillation, which is a non-synchronous high-energy shock, whereas cardioversion is synchronous with the ECG. In order to know when and how to react, a detection algorithm, which analyses an intracardial electrocardiogram (ECG) and classifies the heart rhythm, is implemented in every ICD. All detection algorithms use the heart rate to roughly classify the different heart rhythms. If a tachycardia is detected, it is important to discriminate between a ventricular tachycardia, which is life-threatening, and a supraventricular tachycardia, which is much less threatening. To make this distinction, the detection algorithms analyse the behaviour of the heart cycle intervals, the ECG morphology or, in addition to the ventricular ECG, an atrial ECG. In this paper, morphological algorithms are evaluated and newly developed algorithms are presented. Recent algorithms use mathematical wavelet theory. The evaluation shows that these achieve better results than all but one of the simpler classical morphological algorithms. A new wavelet-based algorithm, developed by the authors, exhibits the best detection results.
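The rate-zone first stage that every ICD detection algorithm starts from can be sketched as follows (the rate limits and zone labels are illustrative assumptions, not clinical device settings; the morphology discrimination step is only indicated by a placeholder message):

```python
def classify_rhythm(rr_intervals_ms, vt_limit_bpm=150, vf_limit_bpm=220):
    """Classify from the mean RR interval of the last detected beats."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    rate = 60000.0 / mean_rr                     # beats per minute
    if rate >= vf_limit_bpm:
        return "VF zone: defibrillate"
    if rate >= vt_limit_bpm:
        return "VT zone: discriminate SVT vs. VT (intervals/morphology)"
    return "normal/slow zone: no therapy"

print(classify_rhythm([800, 820, 790]))   # ~75 bpm
print(classify_rhythm([350, 360, 340]))   # ~171 bpm
print(classify_rhythm([240, 250, 245]))   # ~245 bpm
```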
...have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.
This paper proposes a new algorithm (DA3DED) for edge detection in 3D images. DA3DED is doubly adaptive because it is based on the adaptive algorithm EDAS-1 for detecting edges in functions of one variable and on a second adaptive procedure based on the concept of projective complexity of a 3D image. DA3DED has been tested on 3D images that model real problems (composites and fractures). It has been much faster than the 1D edge detection algorithm for 3D images derived from EDAS-1.
Chady, T.; Caryk, M.
The effectiveness of the flaw detection process using various algorithms for background generation and image thresholding was evaluated. The results of background generation using a median filter method, a polynomial approximation method and an iterative Gaussian approximation method are presented. The resulting background images were subtracted from the base image. After background subtraction, global and local thresholding algorithms were applied. All analyses were carried out using digital radiographs of real welds.
Divisive algorithms are widely used for community detection. A common strategy of divisive algorithms is to remove the external links which connect different communities so that communities become disconnected from each other. Divisive algorithms have been investigated for several decades but some challenges remain unsolved: (1) how to efficiently identify external links, (2) how to efficiently remove external links, and (3) how to end a divisive algorithm without the help of predefined parameters or community definitions. To overcome these challenges, we introduce the concepts of the weak link and autonomous division. The implementation of the proposed divisive algorithm adopts a new link-break strategy similar to a tug-of-war contest, where communities act as contestants and weak links act as breakable ropes. Empirical evaluations on artificial and real-world networks show that the proposed algorithm achieves a better accuracy-efficiency trade-off than some of the latest divisive algorithms.
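The general divisive strategy can be sketched on a toy graph as follows (a hedged illustration only: the "weakest link" here is the edge whose endpoints share the fewest common neighbours, and the stopping rule is a single split; the paper's weak-link definition, tug-of-war strategy and autonomous termination are not modeled):

```python
def components(nodes, edges):
    """Connected components via iterative depth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v])
        seen |= comp
        comps.append(comp)
    return comps

def split_once(nodes, edges):
    """Remove weakest links until the graph first becomes disconnected."""
    edges = set(edges)
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    while len(components(nodes, edges)) == 1:
        # Weakest link: fewest shared neighbours between its endpoints.
        weakest = min(edges, key=lambda e: len(adj[e[0]] & adj[e[1]]))
        edges.discard(weakest)
        adj[weakest[0]].discard(weakest[1])
        adj[weakest[1]].discard(weakest[0])
    return components(nodes, edges)

# Two triangles joined by a single bridge -- the bridge is the weak link.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
print(sorted(sorted(c) for c in split_once(nodes, edges)))
```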
Beretta, Lorenzo; Santaniello, Alessandro
The discovery and description of the genetic background of common human diseases is hampered by their complexity and dynamic behavior. Appropriate bioinformatic tools are needed to account for all the facets of complex diseases, and to this end we recently described the survival dimensionality reduction (SDR) algorithm in an effort to model gene-gene interactions in the context of survival analysis. When one event precludes the occurrence of another event under investigation, as in the 'competing risk model', survival algorithms require particular adjustment to avoid the risk of reporting wrong or biased conclusions. The SDR algorithm was modified to incorporate the cumulative incidence function as well as an adapted version of the Brier score for mutually exclusive outcomes, to better search for epistatic models in the competing risk setting. The applicability of the new SDR algorithm (SDR-CR) was evaluated using synthetic lifetime epistatic datasets with competing risks and on a dataset of scleroderma patients. The SDR-CR algorithm retains satisfactory power to detect the causative variants in simulated datasets under different scenarios of sample size and degrees of type I or type II censoring. In the real-world dataset, SDR-CR was capable of detecting a significant interaction between the IL-1α C-889T and the IL-1β C-511T single-nucleotide polymorphisms to predict the occurrence of restrictive lung disease vs. isolated pulmonary hypertension. We provide a useful extension of the SDR algorithm to analyze epistatic interactions in competing risk settings that may be of use in unveiling the genetic background of complex human diseases. http://sourceforge.net/projects/sdrproject/files/. Copyright © 2012 Elsevier Inc. All rights reserved.
Lanying Lin; Sheng He; Feng Fu; Xiping Wang
Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. To improve on this, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
Guoping Hou; Xuan Ma; Yuelei Zhang
Computer and network security has received, and will continue to receive, much attention. Any unexpected intrusion will damage the network. It is therefore imperative to detect network intrusions to ensure the normal operation of the internet. There are many studies on intrusion detection and intrusion pattern recognition. The artificial neural network (ANN) has proven to be powerful for intrusion detection. However, very little work has discussed the optimization of the input intrusion fea...
Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo
With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images. Besides, it does not effectively reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computational load on hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is modeled with the GMRF, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. Each pixel under test is considered the centre of a local optimization window, called the GMRF detecting window. The degree of abnormality is calculated from the mean vector and the inverse covariance matrix, both computed within the window. The image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, the regional hypothesis detection algorithm based on GMRF and the proposed algorithm are simulated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. We recorded the running time of the three algorithms in the same computing environment; the results show that the proposed algorithm improves the running time by 45.2%, demonstrating good computational efficiency.
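For orientation, the classical global RX detector that serves as the baseline here scores each pixel by the Mahalanobis distance of its spectral vector from the scene mean (a sketch of the baseline only, not the proposed GMRF version; two bands are used so the covariance inverse can be written out explicitly, and the toy pixels are our assumptions):

```python
def rx_scores(pixels):
    """Global RX: Mahalanobis distance of each 2-band pixel from the mean."""
    n = len(pixels)
    mx = sum(p[0] for p in pixels) / n
    my = sum(p[1] for p in pixels) / n
    # 2x2 covariance and its explicit inverse.
    sxx = sum((p[0] - mx) ** 2 for p in pixels) / n
    syy = sum((p[1] - my) ** 2 for p in pixels) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pixels) / n
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    scores = []
    for x, y in pixels:
        dx, dy = x - mx, y - my
        scores.append(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)
    return scores

# Background pixels near (1, 1) plus one spectral anomaly.
pixels = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 1.0), (0.95, 1.05),
          (5.0, 0.2)]
scores = rx_scores(pixels)
print(scores.index(max(scores)))  # the anomaly gets the largest RX score
```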
Grilli, S. T.; Guérin, C. A.; Shelby, M. R.; Grilli, A. R.; Insua, T. L.; Moran, P., Jr.
A High-Frequency (HF) radar was installed by Ocean Networks Canada in Tofino, BC, to detect tsunamis from far- and near-field seismic sources; in particular, from the Cascadia Subduction Zone. This HF radar can measure ocean surface currents up to a 70-85 km range, depending on atmospheric conditions, based on the Doppler shift they cause in ocean waves at the Bragg frequency. In earlier work, we showed that tsunami currents must be at least 0.15 m/s to be directly detectable by a HF radar, when considering environmental noise and background currents (from tide/mesoscale circulation). This limits direct tsunami detection to shallow-water areas where currents are sufficiently strong due to wave shoaling and, hence, to the continental shelf. It follows that, in locations with a narrow shelf, warning times using a direct inversion method will be small. To detect tsunamis in deeper water, beyond the continental shelf, we proposed a new algorithm that does not require directly inverting currents, but instead is based on observing changes in the patterns of spatial correlations of the raw radar signal between two radar cells located along the same wave ray, after time is shifted by the tsunami propagation time along the ray. A pattern change indicates the presence of a tsunami. We validated this new algorithm for idealized tsunami wave trains propagating over a simple seafloor geometry in a direction normally incident to shore. Here, we further develop, extend, and validate the algorithm for realistic case studies of seismic tsunami sources impacting Vancouver Island, BC. Tsunami currents, computed with a state-of-the-art long-wave model, are spatially averaged over cells aligned along individual wave rays, located within the radar sweep area, obtained by solving the wave geometric optics equation; for long waves, such rays and the tsunami propagation times along them are functions only of the seafloor bathymetry, and hence can be precalculated for different incident tsunami
Saadi, Dorthe Bodholt; Egstrup, Kenneth; Branebjerg, Jens
We have designed and optimized an automatic QRS complex detection algorithm for electrocardiogram (ECG) signals recorded with the DELTA ePatch platform. The algorithm is able to automatically switch between single-channel and multi-channel analysis mode. This preliminary study includes data from 11...
Thompson, Mary Kathryn; Espensen, Christina; Clemmensen, Line Katrine Harder
This work characterizes and optimizes an outlier detection algorithm to identify potentially invalid scores produced by jury members while grading engineering design projects. The paper describes the original algorithm and the associated adjudication process in detail. The impact of the various...
Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik
introduce a heuristic algorithm for the multiple maximum common edge subgraph problem that is able to detect large common substructures shared across multiple, real-world size networks efficiently. Our algorithm uses a combination of iterated local search, simulated annealing and a pheromone...... apply it to unravel a biochemical backbone inherent in different species, modeled as multiple maximum common subgraphs....
Shoaib, M.; Scholten, Johan; Havinga, Paul J.M.; Durmaz, O.
Smoking is known to be one of the main causes for premature deaths. A reliable smoking detection method can enable applications for an insight into a user’s smoking behaviour and for use in smoking cessation programs. However, it is difficult to accurately detect smoking because it can be performed
Community detection in dynamic networks is an important research topic and has received an enormous amount of attention in recent years. Previous detection methods select modularity as the measure to quantify the quality of a community partition. However, modularity suffers from resolution limits. In this paper, we propose a novel multiobjective evolutionary algorithm for community detection in dynamic networks based on the framework of the nondominated sorting genetic algorithm. Modularity density, which addresses the limitations of the modularity function, is adopted to measure the snapshot cost, and normalized mutual information is selected to measure the temporal cost. Knowledge of the problem's characteristics is used in designing the genetic operators. Furthermore, a local search operator is designed, which improves the effectiveness and efficiency of community detection. Experimental studies based on synthetic datasets show that the proposed algorithm obtains better performance than the compared algorithms.
Because of the complex constraints in complex product assembly lines, existing algorithms do not always detect bottlenecks correctly and have a low convergence rate. In order to solve this problem, a hybrid algorithm combining an adjacency matrix and an improved genetic algorithm (GA) is proposed. First, a complex assembly network model (CANM) is defined based on the operating capacity of each workstation. Second, an adjacency matrix is proposed to convert bottleneck detection in a complex assembly network (CAN) into a combinatorial max-flow optimization problem. Third, an improved GA is proposed to solve this max-flow problem by retaining the best chromosome. Finally, the min-cut sets of the CAN are obtained after calculation, and bottleneck workstations are detected according to the analysis of the min-cut sets. A case study shows that this algorithm can detect bottlenecks correctly and that its convergence rate is high.
Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed with a visual saliency computation. Then, vehicle candidates are generated by using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% and a processing rate of 25 Hz, which is better than existing methods.
Full Text Available Data mining is the extraction of hidden predictive information from large databases. It is a technology with the potential to study and analyze useful information present in data. Data objects that do not fit into the general behavior of the data are termed outliers. Outlier detection in databases has numerous applications, such as fraud detection, customized marketing, and the search for terrorism. By definition, outliers are rare occurrences and hence represent a small portion of the data. However, the use of outlier detection for various purposes is not an easy task. This research proposes a modified PAM for detecting outliers. The proposed technique has been implemented in Java. The results produced by the proposed technique are found to be better than those of the existing technique in terms of outliers detected and time complexity.
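A minimal sketch of the PAM (Partitioning Around Medoids) idea behind such an approach, paired with a simple distance-to-medoid outlier rule. The greedy swap loop and the `factor` threshold are illustrative assumptions, not the paper's modification:

```python
import math

def kmedoids(points, k, iters=20):
    """Plain PAM-style k-medoids: greedy best-swap improvement of total cost."""
    medoids = list(range(k))  # initialize with the first k points

    def cost(meds):
        return sum(min(math.dist(points[i], points[m]) for m in meds)
                   for i in range(len(points)))

    for _ in range(iters):
        best, best_cost = medoids, cost(medoids)
        for mi in range(len(medoids)):
            for h in range(len(points)):
                if h in medoids:
                    continue
                trial = medoids[:mi] + [h] + medoids[mi + 1:]
                c = cost(trial)
                if c < best_cost:
                    best, best_cost = trial, c
        if best == medoids:   # local optimum reached
            break
        medoids = best
    return medoids

def medoid_outliers(points, k, factor=2.0):
    """Flag points whose nearest-medoid distance exceeds factor * mean distance."""
    medoids = kmedoids(points, k)
    dists = [min(math.dist(p, points[m]) for m in medoids) for p in points]
    mean = sum(dists) / len(dists)
    return [i for i, dv in enumerate(dists) if dv > factor * mean]
```

For two tight clusters plus one distant point, only the distant point's index is returned.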
Nguyen, Lien B; Nguyen, Anh V; Ling, Sai Ho; Nguyen, Hung T
Hypoglycemia is the most common but highly feared complication induced by the intensive insulin therapy in patients with type 1 diabetes mellitus (T1DM). Nocturnal hypoglycemia is dangerous because sleep obscures early symptoms and potentially leads to severe episodes which can cause seizure, coma, or even death. It is shown that the hypoglycemia onset induces early changes in electroencephalography (EEG) signals which can be detected non-invasively. In our research, EEG signals from five T1DM patients during an overnight clamp study were measured and analyzed. By applying a method of feature extraction using Fast Fourier Transform (FFT) and classification using neural networks, we establish that hypoglycemia can be detected efficiently using EEG signals from only two channels. This paper demonstrates that by implementing a training process of combining genetic algorithm and Levenberg-Marquardt algorithm, the classification results are improved markedly up to 75% sensitivity and 60% specificity on a separate testing set.
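The FFT-based feature extraction step can be sketched as follows; this is a naive DFT band-power computation, where the band edges and sampling rate are illustrative assumptions (the study feeds such spectral features to a neural network trained with a GA/Levenberg-Marquardt combination):

```python
import cmath
import math

def dft_power(x):
    """Naive DFT power spectrum (fine for short EEG windows; use an FFT in practice)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

def band_power(x, fs, lo, hi):
    """Total spectral power between lo and hi Hz for a window sampled at fs Hz."""
    p = dft_power(x)
    n = len(x)
    return sum(p[k] for k in range(len(p)) if lo <= k * fs / n <= hi)

def eeg_features(window, fs=64):
    """Theta (4-8 Hz) and alpha (8-13 Hz) band powers as example classifier inputs."""
    return band_power(window, fs, 4, 8), band_power(window, fs, 8, 13)
```

A pure 10 Hz sinusoid, for instance, puts essentially all of its power in the alpha band and none in theta.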
ARL-TR-8272 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.
Rustam, Z.; Talita, A. S.
Intrusion Detection System (IDS) is an essential part of security systems, strengthening the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and utilize the data sources available in the system. There are two approaches to IDS: Misuse Detection and Anomaly Detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve Anomaly Detection problems. Besides using the fuzzy membership concept to assign an object to a cluster, other approaches, such as combining fuzzy and possibilistic memberships or feature-weighted based methods, are also used. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic memberships, as a powerful method for solving the anomaly detection problem, since in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classify the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the data for training, with clustering accuracy reaching 90.28%.
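The kernelized fuzzy membership idea can be sketched as follows; this is the standard fuzzy c-means membership update with an RBF kernel distance, and the possibilistic term and medoid update of the actual method are omitted:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel; in kernel space the squared distance is 2 - 2*K(x, y)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def kernel_dist(x, y, sigma=1.0):
    return math.sqrt(max(0.0, 2.0 - 2.0 * gaussian_kernel(x, y, sigma)))

def fuzzy_memberships(x, medoids, m=2.0, sigma=1.0):
    """Fuzzy membership of x to each medoid (standard fuzzy c-means update rule)."""
    d = [kernel_dist(x, c, sigma) for c in medoids]
    if any(di == 0.0 for di in d):          # x coincides with a medoid
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(d)))
            for i in range(len(d))]
```

Memberships always sum to one, and a point near one medoid receives most of its membership weight there.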
A. V. Chernov
Full Text Available Security monitoring and incident management systems have become the main research focus in the area of intelligent railway control systems. In this work, we discuss the system architecture of a multilevel intelligent control system for Russian Railway transport, together with security incident classification and the incident handling process. We give a detailed explanation of the problems and tasks of a security information and event management system as an important part of a multilevel intelligent control system. We use rough sets theory to detect abnormal activity in the considered system. Our main result consists in the development of simple and fast detection techniques that are based on rough sets theory and allow investigating a new type of incidents.
Full Text Available Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak
Nguyen The Cuong
Full Text Available Video files are files that store motion pictures and sound. In today's world, the need for automated processing of the information in video files is increasing. Automated information processing has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, the detection and tracking of object movement in video files plays an important role. This article describes methods of detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
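The hysteresis constraint described above can be sketched as follows; this is an illustrative version operating on a boolean in/out-of-cloud time series, where `min_run` stands in for the algorithm's actual hysteresis parameter:

```python
def cloud_edges(in_cloud_flags, min_run=3):
    """Report indices where the in/out-of-cloud state changes, with hysteresis:
    a state flip is only committed after min_run consecutive samples agree,
    so isolated transient cloud puffs or data dropouts are not reported
    as cloud boundaries.
    """
    edges = []
    state = in_cloud_flags[0]
    run = 0
    for i, flag in enumerate(in_cloud_flags):
        if flag != state:
            run += 1
            if run >= min_run:               # sustained change: commit the edge
                edges.append(i - min_run + 1)  # edge starts where the run began
                state = flag
                run = 0
        else:
            run = 0
    return edges
```

A single-sample "puff" inside a clear-air stretch is ignored, while sustained transitions are reported at the sample where the new state began.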
Full Text Available Results of an investigation of the efficiency of face detection algorithms in a banking client visual verification system are presented. The video recordings were made under real conditions in three bank operating outlets using a miniature industrial USB camera. The aim of the experiments was to check the practical usability of the face detection method in a biometric bank client verification system. The main assumption was to keep user interaction with the application as simple as possible. The applied face detection algorithms are described, and the results of face detection achieved under real bank environment conditions are presented. Practical limitations of the application, based on the problems encountered, are discussed.
Full Text Available Detection of messenger Ribonucleic Acid (mRNA) spots in fluorescence microscopy images is of great importance for biologists seeking better understanding of cell functionality. Fluorescence microscopy and specific staining methods make biological...
Lee, Hoshik; Rusin, Craig G.; Lake, Douglas E.; Clark, Matthew T.; Guin, Lauren; Smoot, Terri J.; Paget-Brown, Alix O.; Vergales, Brooke D.; Kattwinkel, John; Moorman, J. Randall; Delos, John B.
Apnea of prematurity (AOP) is an important and common clinical problem, and is often the rate-limiting process in NICU discharge. Accurate detection of episodes of clinically important neonatal apnea using existing chest impedance monitoring is a clinical imperative. The technique relies on changes in impedance as the lungs fill with air, a high impedance substance. A potential confounder, however, is blood coursing through the heart. Thus the cardiac signal during apnea might be mistaken for breathing. We report here a new filter to remove the cardiac signal from the chest impedance that employs a novel resampling technique optimally suited to remove the heart rate signal, allowing improved apnea detection. We also develop an apnea detection method that employs the chest impedance after cardiac filtering. The method has been applied to a large database of physiological signals, and we prove that, compared to the presently-used monitors, the new method gives substantial improvement in apnea detection. PMID:22156193
Lee, Hoshik; Delos, John B; Rusin, Craig G; Lake, Douglas E; Guin, Lauren; Smoot, Terri J; Moorman, J Randall; Clark, Matthew T; Paget-Brown, Alix O; Vergales, Brooke D; Kattwinkel, John
Apnea of prematurity is an important and common clinical problem, and is often the rate-limiting process in NICU discharge. Accurate detection of episodes of clinically important neonatal apnea using existing chest impedance (CI) monitoring is a clinical imperative. The technique relies on changes in impedance as the lungs fill with air, a high impedance substance. A potential confounder, however, is blood coursing through the heart. Thus, the cardiac signal during apnea might be mistaken for breathing. We report here a new filter to remove the cardiac signal from the CI that employs a novel resampling technique optimally suited to remove the heart rate signal, allowing improved apnea detection. We also develop an apnea detection method that employs the CI after cardiac filtering. The method has been applied to a large database of physiological signals, and we prove that, compared to the presently used monitors, the new method gives substantial improvement in apnea detection.
National Aeronautics and Space Administration — The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. We address...
Gierałtowski, Jan; Ciuchciński, Kamil; Grzegorczyk, Iga; Kośna, Katarzyna; Soliński, Mateusz; Podziemski, Piotr
Current gold-standard algorithms for heart beat detection do not work properly in the case of high noise levels and do not make use of multichannel data collected by modern patient monitors. The main idea behind the method presented in this paper is to detect the most prominent part of the QRS complex, i.e. the RS slope. We localize the RS slope based on the consistency of its characteristics, i.e. adequate, automatically determined amplitude and duration. It is a very simple and non-standard, yet very effective, solution. Minor data pre-processing and parameter adaptations make our algorithm fast and noise-resistant. As one of a few algorithms in the PhysioNet/Computing in Cardiology Challenge 2014, our algorithm uses more than two channels (i.e. ECG, BP, EEG, EOG and EMG). Simple fundamental working rules make the algorithm universal: it is able to work on all of these channels with no or only little changes. The final result of our algorithm in phase III of the Challenge was 86.38 (88.07 for a 200 record test set), which gave us fourth place. Our algorithm shows that current standards for heart beat detection could be improved significantly by taking a multichannel approach. This is an open-source algorithm available through the PhysioNet library.
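The RS-slope idea can be sketched as follows; this is an illustrative scan for monotonically falling runs with amplitude and duration checks, whereas the paper determines these thresholds automatically and applies the rule across multiple channels:

```python
def find_rs_slopes(signal, min_amp, max_dur):
    """Locate candidate RS slopes: monotonically falling runs whose drop is at
    least min_amp within at most max_dur samples. The two checks stand in for
    the automatically determined amplitude/duration criteria of the paper.
    Returns (R index, S index) pairs.
    """
    slopes = []
    i, n = 0, len(signal)
    while i < n - 1:
        if signal[i + 1] < signal[i]:            # start of a falling run
            j = i
            while j < n - 1 and signal[j + 1] < signal[j]:
                j += 1                            # extend run to its trough
            if signal[i] - signal[j] >= min_amp and j - i <= max_dur:
                slopes.append((i, j))
            i = j
        else:
            i += 1
    return slopes
```

On a toy trace with two sharp R-to-S drops and some low-amplitude noise, only the two steep drops are returned.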
Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502 (Japan)
An iterative algorithm for the multiuser detection problem that arises in code division multiple access (CDMA) systems is developed on the basis of Pearl's belief propagation (BP). We show that the BP-based algorithm exhibits nearly optimal performance in a practical time scale by utilizing the central limit theorem and self-averaging property appropriately, whereas direct application of BP to the detection problem is computationally difficult and far from practical. We further present close relationships of the proposed algorithm to the Thouless-Anderson-Palmer approach and replica analysis known in spin-glass research.
Kim, Jung Taek; Park, Jae Chang; Lee, Jung Woon; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In
It is important to note that an effective means to assure the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the pressurized water reactor and evaluate the performance of the developed algorithm. The scope of the work can be classified into two categories: a state-space model-based FDD algorithm built on the interacting multiple model (IMM) algorithm, and an input-output model-based FDD algorithm built on the ART neural network. Extensive computer simulations are carried out to evaluate the performance in terms of speed and accuracy.
Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve
A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and the cloud contamination. Therefore, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is shown in 10 × 10 pixels of the MODIS observations. From January to June 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land ranges from 60 to 67%, to avoid errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
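The classic BTD test that this work builds on can be sketched as follows; this is the simple fixed-threshold split-window form whose limitations the BTR/composite approach addresses, and the threshold value here is an assumption:

```python
def btd_dust_mask(bt11, bt12, threshold=0.0):
    """Classic split-window dust test on brightness temperatures (Kelvin).

    Airborne mineral dust tends to make BT(11um) - BT(12um) negative, while
    most clear or water-cloud scenes keep it positive, so pixels below a
    fixed threshold are flagged as dusty. Choosing that offset robustly is
    exactly the weakness the BTR/30-day-composite method aims to fix.
    bt11, bt12: 2-D lists (rows of pixels) for the two IR channels.
    """
    return [[(b11 - b12) < threshold for b11, b12 in zip(r11, r12)]
            for r11, r12 in zip(bt11, bt12)]
```

Pixels where the 11 μm channel is colder than the 12 μm channel (negative BTD) come back flagged.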
Algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language ... [Figure 2: Symbols used in the flowchart language to represent Assignment (e.g. x := sin(theta)), Read (Read A, B, C), and Print (Print x, y, z).]
In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...
Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran
Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.
The free and open access to all archived Landsat images in 2008 completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We review a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divide change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, are analyzed. Moreover, some of the widely used change detection algorithms are also discussed. Finally, we review different change detection applications, divided into two categories: change target and change agent detection.
Li, Sheng; Garrett-Bakelman, Francine E; Akalin, Altuna; Zumbo, Paul; Levine, Ross; To, Bik L; Lewis, Ian D; Brown, Anna L; D'Andrea, Richard J; Melnick, Ari; Mason, Christopher E
DNA methylation profiling reveals important differentially methylated regions (DMRs) of the genome that are altered during development or perturbed by disease. To date, few programs exist for regional analysis of enriched or whole-genome bisulfite conversion sequencing data, even though such data are increasingly common. Here, we describe an open-source, optimized method for determining empirically based DMRs (eDMR) from high-throughput sequence data that is applicable to enriched whole-genome methylation profiling datasets, as well as other globally enriched epigenetic modification data. We show that our bimodal distribution model and weighted cost function for optimized regional methylation analysis provide accurate boundaries of regions harboring significant epigenetic modifications. Our algorithm takes the spatial distribution of CpGs into account for the enrichment assay, allowing for optimization of the definition of empirical regions for differential methylation. Combined with the dependent adjustment for regional p-value combination and DMR annotation, we provide a method that may be applied to a variety of datasets for rapid DMR analysis. Our method classifies both the directionality of DMRs and their genome-wide distribution, which we have observed to show clinical relevance through correct stratification of two Acute Myeloid Leukemia (AML) tumor subtypes. Our weighted optimization algorithm eDMR for calling DMRs extends an established DMR R pipeline (methylKit) and provides a needed resource in epigenomics. Our method enables an accurate and scalable way of finding DMRs in high-throughput methylation sequencing experiments. eDMR is available for download at http://code.google.com/p/edmr/.
Bonham, Paul; Iqbal, Azlan
We describe a general method of detecting valid chains or links of pieces on a two-dimensional grid. Specifically, using the example of the chess variant known as Switch-Side Chain-Chess (SSCC). Presently, no foolproof method of detecting such chains in any given chess position is known and existing graph theory, to our knowledge, is unable to fully address this problem either. We therefore propose a solution implemented and tested using the C++ programming language. We have been unable to fi...
Full Text Available The preceding-vehicle detection technique in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm via image processing techniques. First, the brightness of the taillights at night is used as the typical feature, and we use an existing global detection algorithm to detect and pair the taillights. When the vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. Then, the vehicle is detected only in the PR. This reduces the detection time and avoids false pairing between bright spots in the PR and bright spots outside the PR. Additionally, we present a threshold-updating method to make the thresholds adaptive. Finally, experimental studies are provided to demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.
Susan Shin-Jung Lee
Full Text Available Predicting the risk of tuberculosis (TB) in people living with HIV (PLHIV) using a single test is currently not possible. We aimed to develop and validate a clinical algorithm, using baseline CD4 cell counts, HIV viral load (pVL), and interferon-gamma release assay (IGRA), to identify PLHIV who are at high risk for incident active TB in low-to-moderate TB burden settings where highly active antiretroviral therapy (HAART) is routinely provided. A prospective, 5-year cohort study of adult PLHIV was conducted from 2006 to 2012 in two hospitals in Taiwan. HAART was initiated based on contemporary guidelines (CD4 count ≤ 350/μL). Cox regression was used to identify the predictors of active TB and to construct the algorithm. The validation cohorts included 1455 HIV-infected individuals from previously published studies. The area under the receiver operating characteristic (ROC) curve was calculated. Seventeen of 772 participants developed active TB during a median follow-up period of 5.21 years. Baseline CD4 < 350/μL or pVL ≥ 100,000/mL was a predictor of active TB (adjusted HR 4.87, 95% CI 1.49-15.90, P = 0.009). A positive baseline IGRA predicted TB in patients with baseline CD4 ≥ 350/μL and pVL < 100,000/mL (adjusted HR 6.09, 95% CI 1.52-24.40, P = 0.01). Compared with an IGRA-alone strategy, the algorithm improved the sensitivity from 37.5% to 76.5% and the negative predictive value from 98.5% to 99.2%. Compared with an untargeted strategy, the algorithm spared 468 (60.6%) from unnecessary TB preventive treatment. The area under the ROC curve was 0.692 (95% CI: 0.587-0.798) for the study cohort, and 0.792 (95% CI: 0.776-0.808) and 0.766 in the two validation cohorts. A validated algorithm incorporating the baseline CD4 cell count, HIV viral load, and IGRA status can be used to guide targeted TB preventive treatment in PLHIV in low-to-moderate TB burden settings where HAART is routinely provided to all PLHIV. The implementation of this algorithm will avoid unnecessary
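The decision rule, as stated in the abstract, can be sketched as follows; cut-offs and units are as reported there, and this is an illustration only, not clinical guidance:

```python
def tb_risk_group(cd4, pvl, igra_positive):
    """Stratify a PLHIV patient for incident-TB risk, per the abstract:
    baseline CD4 < 350/uL or pVL >= 100,000/mL marks high risk; otherwise
    a positive baseline IGRA marks high risk; everyone else is low risk
    and can be spared TB preventive treatment under a targeted strategy.
    """
    if cd4 < 350 or pvl >= 100_000:
        return "high risk"
    if igra_positive:
        return "high risk"
    return "low risk"
```

Only patients with preserved CD4 counts, suppressed viral load, and a negative IGRA fall into the low-risk group.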
Nomura, Chie; Masayama, Atsushi; Yamaguchi, Mizuka; Sakuma, Daisuke; Kajimura, Keiji
In this study, species-specific identification of five toxic mushrooms, Chlorophyllum molybdites, Gymnopilus junonius, Hypholoma fasciculare, Pleurocybella porrigens, and Tricholoma ustale, which have been involved in food-poisoning incidents in Japan, was investigated. Specific primer pairs targeting internal transcribed spacer (ITS) regions were designed for PCR detection. The specific amplicons were obtained from fresh, cooked, and simulated gastric fluid (SGF)-treated samples. No amplicons were detected from other mushrooms with similar morphology. Our method using one-step extraction of mushrooms allows rapid detection within 2.5 hr. It could be utilized for rapid identification or screening of toxic mushrooms.
Nov 24, 2017 ... Abstract. Land cover change detection has been a topic of active research in the remote sensing community. Due to enormous amount of data available from satellites, it has attracted the attention of data mining researchers to search a new direction for solution. The Terra Moderate Resolution Imaging ...
The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the information will be reduced and may improve classification's accuracy. This paper proposed a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...
Land cover change detection has been a topic of active research in the remote sensing community. Due to the enormous amount of data available from satellites, it has attracted the attention of data mining researchers to search a new direction for solution. The Terra Moderate Resolution Imaging Spectrometer (MODIS) ...
Shuxin, Li; Zhilong, Zhang; Biao, Li
Planes are an important target category in remote sensing, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, and we can get more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and has provided great performance improvements in the field of target detection and recognition in everyday scenes. We combined this technology with remote sensing target detection and propose an algorithm with an end-to-end deep network, which can learn from remote sensing images to detect the targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of the plane target and has better detection performance than the older methods.
Full Text Available The HJ-1B satellite, launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates, including the RED, NIR, MIR and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is larger than 800 K, the algorithm has a high probability of detection. But if the simulated fire area is smaller than 10 m², the fire may only be detected when the simulated fire temperature is larger than 900 K. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.
Lynch, Kristine E.; Mumford, Sunni L.; Schliep, Karen C.; Whitcomb, Brian W.; Zarek, Shvetha M.; Pollack, Anna Z; Bertone-Johnson, Elizabeth R.; Danaher, Michelle; Wactawski-Wende, Jean; Gaskins, Audrey J.; Schisterman, Enrique F.
Objective To compare previously used algorithms to identify anovulatory menstrual cycles in women self-reporting regular menses. Design Prospective cohort study Setting Western New York Study participants 259 healthy, regularly menstruating women followed for one (n=9) or two (n=250) menstrual cycles (2005–2007). Intervention(s) None. Main Outcome Measure(s) Prevalence of sporadic anovulatory cycles identified using eleven previously defined algorithms that utilize estradiol, progesterone, and luteinizing hormone (LH) concentrations. Result(s) Algorithms based on serum LH, estradiol, and progesterone levels detected a prevalence of anovulation across the study period of 5.5% to 12.8% (concordant classification for 91.7% to 97.4% of cycles). The prevalence of anovulatory cycles varied from 3.4% to 18.6% using algorithms based on urinary LH alone or with the primary estradiol metabolite, estrone-3-glucuronide (E3G), levels. Conclusion(s) The prevalence of anovulatory cycles among healthy women varied by algorithm. Mid-cycle LH surge urine-based algorithms used in over-the-counter fertility monitors tended to classify a higher proportion of anovulatory cycles compared to luteal phase progesterone serum-based algorithms. Our study demonstrates that algorithms based on the LH surge, or in conjunction with E3G, potentially estimate a higher percentage of anovulatory episodes. Addition of measurements of post-ovulatory serum progesterone or urine pregnanediol may aid in detecting ovulation. PMID:24875398
W. Wang; J.J. Qu; X. Hao; Y. Liu
In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remotely sensed characteristics of these fires have to be systematically...
Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo; Raffetto, Mirco; Randazzo, Andrea
The application of a global optimization procedure to the detection of buried inhomogeneities is studied in the present paper. The object inhomogeneities are schematized as multilayer infinite dielectric cylinders with elliptic cross sections. An efficient recursive analytical procedure is used for the forward scattering computation. A functional is constructed in which the field is expressed as a series solution of Mathieu functions. Starting from the input scattered data, the iterative minimiza...
Zhou, Hong; Burkom, Howard; Winston, Carla A; Dey, Achintya; Ajani, Umed
National syndromic surveillance systems require optimal anomaly detection methods. For method performance comparison, we injected multi-day signals stochastically drawn from lognormal distributions into time series of aggregated daily visit counts from the U.S. Centers for Disease Control and Prevention's BioSense syndromic surveillance system. The time series corresponded to three different syndrome groups: rash, upper respiratory infection, and gastrointestinal illness. We included a sample of facilities with data reported every day and with median daily syndromic counts ⩾1 over the entire study period. We compared anomaly detection methods of five control chart adaptations, a linear regression model, and a Poisson regression model. We assessed the sensitivity and timeliness of these methods for detection of multi-day signals. At daily background alert rates of 1% and 2%, the sensitivities and timeliness ranged from 24 to 77% and 3.3 to 6.1 days, respectively. The overall sensitivity and timeliness increased substantially after stratification by weekday versus weekend and holiday. Adjusting the baseline syndromic count by the total number of facility visits gave consistently improved sensitivity and timeliness without stratification, but it provided better performance when combined with stratification. The daily syndrome/total-visit proportion method did not improve the performance. In general, alerting based on linear regression outperformed control chart based methods. A Poisson regression model obtained the best sensitivity in the series with high-count data. Published by Elsevier Inc.
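The specific control chart adaptations compared in the study are not spelled out in this summary. As an illustration of the general idea, here is a minimal EARS-C1-style chart (a standard syndromic surveillance baseline method, used here only as a representative sketch, not necessarily one of the methods evaluated) that flags days whose count exceeds the trailing-window mean by a fixed number of standard deviations:

```python
def c1_alerts(counts, baseline=7, threshold=3.0):
    """EARS-C1-style control chart: flag day t when its count exceeds
    the mean of the preceding `baseline` days by more than `threshold`
    standard deviations of that window."""
    alerts = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mean = sum(window) / baseline
        var = sum((x - mean) ** 2 for x in window) / baseline
        std = max(var ** 0.5, 1e-9)   # guard against a zero-variance window
        if (counts[t] - mean) / std > threshold:
            alerts.append(t)
    return alerts
```

For example, a stable series of daily counts around 10 followed by a spike of 30 triggers a single alert on the spike day; the threshold trades off sensitivity against the background alert rate, exactly the trade-off the study quantifies.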
Gude, A.; Maraschek, M.; Kardaun, O.; the ASDEX Upgrade Team
A sawtooth crash algorithm that can automatically detect irregular sawteeth with strongly varying crash characteristics, including inverted crashes with a central signal increase, has been developed. Such sawtooth behaviour is observed in ASDEX Upgrade with its tungsten wall, especially in phases with central ECRH. This application of ECRH for preventing impurity accumulation is also envisaged for ITER. The detection consists of three steps: a sensitive edge detection, a multichannel combination to increase detection performance, and a profile analysis that tests generic sawtooth crash features. The effect of the detection parameters on the edge detection results has been investigated using synthetic signals and tested in an application to ASDEX Upgrade soft x-ray data.
Muchtadi-Alamsyah, Intan; Akbari Utomo, Taufiq
The security of Elliptic Curve Cryptography depends on the difficulty of solving the Elliptic Curve Discrete Logarithm Problem (ECDLP). In this paper, we propose a modified Pollard Rho algorithm that uses Brent's cycle detection algorithm to solve the ECDLP. We compare the running time and the number of iterations of Pollard Rho with Brent cycle detection against Pollard Rho with the negation map. In particular, for Koblitz curves, we also compare Pollard Rho with Brent cycle detection against Pollard Rho with the negation and Frobenius maps.
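The abstract does not reproduce the cycle-finding routine itself, but Brent's cycle detection (the component the authors combine with Pollard Rho) is short enough to sketch. The version below is a generic implementation for any iterated map `f`, not the authors' ECDLP-specific variant:

```python
def brent_cycle(f, x0):
    """Brent's cycle detection for the sequence x0, f(x0), f(f(x0)), ...
    Returns (lam, mu): cycle length and index of the cycle start.
    Uses powers of two to bound the number of comparisons, which is why
    it typically outperforms Floyd's tortoise-and-hare."""
    # Phase 1: find the cycle length lam.
    power = lam = 1
    tortoise = x0
    hare = f(x0)
    while tortoise != hare:
        if power == lam:            # time to start a new power of two
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    # Phase 2: find the cycle start mu.
    tortoise = hare = x0
    for _ in range(lam):            # advance hare by one full cycle length
        hare = f(hare)
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    return lam, mu
```

In a Pollard Rho setting, `f` would be the pseudo-random walk on curve points and a collision yields the discrete logarithm; here the function is left abstract.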
Lunar, Maja M; Matković, Ivana; Tomažič, Janez; Vovko, Tomaž D; Pečavar, Blaž; Poljak, Mario
Resolving the dilemma of whether a rise in the number of HIV diagnoses represents an actual increase in HIV transmission or is a result of improved HIV surveillance is crucial before implementing national HIV prevention strategies. Annual proportions of recent infections (RI) among persons newly diagnosed with HIV-1 in Slovenia over 27 years (1986-2012) were determined using an algorithm consisting of routine baseline CD4 and HIV viral load measurements and the Aware BED EIA HIV-1 Incidence Test (BED test). The study achieved the highest coverage (71%) of persons diagnosed with HIV over the entire duration of an HIV epidemic in a given country/region. Out of 416 patients, 170 (40.9%) had a baseline CD4 cell count less than 200 cells/mm(3) and/or an HIV-1 viral load less than 400 copies/ml and were characterized as having a long-standing infection (LSI). The remaining 246 patients were additionally tested using the BED test. Overall, 23% (97/416) of the patients were labeled RI. The characteristics significantly associated with RI were as follows: younger age, acute retroviral syndrome, CDC class A and other than C, no AIDS-defining illnesses, an HIV test performed in the past, a higher viral load, and a higher CD4 cell count. An interesting trend in the proportion of RI was observed, with a peak in 2005 (47% RI) and the lowest point in 2008 (12%), in parallel with a rise in the number of new HIV diagnoses. This study could help promote the idea of introducing periodic HIV incidence monitoring using a simple and affordable algorithm. © 2015 Wiley Periodicals, Inc.
Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on detecting intrusions. Once an intrusion is detected, an alert or claim is generated. However, any unidentified malicious node in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in existing cooperative, distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. The algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.
Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian
There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: email@example.com.
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
Melanoma is one of the most rapidly accelerating cancers in the world. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an output image mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method by calculating the average XOR error for each of the two algorithms. The average error for the test images was 0.10 using the new algorithm and 0.99 using the SRM method. Comparing the average error values produced by the two algorithms makes it evident that the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
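The XOR error used for evaluation can be stated compactly. A minimal sketch over flattened binary masks, assuming the common convention of normalizing the disagreement area by the manual lesion area (the paper's exact normalization is not given in this summary):

```python
def xor_error(auto_mask, manual_mask):
    """XOR error between an automatic and a manual segmentation:
    number of pixels where the two binary masks disagree, divided by
    the manual lesion area (assumed normalization convention)."""
    disagree = sum(a != m for a, m in zip(auto_mask, manual_mask))
    lesion = sum(manual_mask)
    return disagree / lesion
```

A perfect match gives 0.0, while an error near 1.0 (as reported for the SRM baseline) means the disagreement area is about as large as the lesion itself.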
Chacon Murguia, Mario I.; Valdez Martinez, Antonio
This paper presents a new algorithm for object motion detection and trajectory tracking. The method was developed as part of a machine vision system for human fertility analysis. Fertility analysis is based on the amount of spermatozoa in semen samples and their type of movement. Two approaches were tested to detect the movement of the spermatozoa: image subtraction and optical flow. Image subtraction is a simple and fast method, but it has some complications in detecting individual motion when large numbers of objects are present. The optical flow method is able to detect motion, but it turns out to be computationally expensive. It does not generate a specific trajectory for each spermatozoon, and it does not detect static spermatozoa. The algorithm developed detects object motion through an orthogonal search of blocks in consecutive frames. Matching of two blocks in consecutive frames is defined by squared differences. A dynamic control array is used to store the trajectory of each spermatozoon and to deal with the different situations in the trajectories, such as new spermatozoa entering a frame, spermatozoa leaving the frame, and spermatozoa collisions. The algorithm developed turns out to be faster than the optical flow algorithm and solves the problems of the image subtraction method. It also detects static spermatozoa and generates a motion vector for each spermatozoon that describes its trajectory.
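The core of the tracker is block matching between consecutive frames with a squared-difference criterion. The sketch below uses an exhaustive search over a small displacement window for clarity; the authors' orthogonal search visits far fewer candidate displacements, but it minimizes the same cost:

```python
def ssd(a, ra, ca, b, rb, cb, B):
    """Sum of squared differences between the BxB blocks at (ra, ca)
    in frame `a` and (rb, cb) in frame `b`."""
    return sum((a[ra + i][ca + j] - b[rb + i][cb + j]) ** 2
               for i in range(B) for j in range(B))

def best_match(prev, curr, r, c, B, R):
    """Displacement (dr, dc), |dr|, |dc| <= R, of the BxB block at
    (r, c) in `prev` that minimizes the SSD against frame `curr`."""
    best = None
    H, W = len(curr), len(curr[0])
    for dr in range(-R, R + 1):
        for dc in range(-R, R + 1):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 and r2 + B <= H and 0 <= c2 and c2 + B <= W:
                cost = ssd(prev, r, c, curr, r2, c2, B)
                if best is None or cost < best[0]:
                    best = (cost, dr, dc)
    return best[1], best[2]
```

For example, a bright 2x2 block that moves one pixel down and one pixel right between frames is recovered as displacement (1, 1); chaining such displacements per object is what fills the trajectory array.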
Balaji Ramachandran, Supriya; Gillis, Kevin D
Electrochemical microelectrodes located immediately adjacent to the cell surface can detect spikes of amperometric current during exocytosis as the transmitter released from a single vesicle is oxidized on the electrode surface. Automated techniques to detect spikes are needed in order to quantify the spike rate as a measure of the rate of exocytosis. We have developed a Matched Filter (MF) detection algorithm that scans the data set with a library of prototype spike templates while performing a least-squares fit to determine the amplitude and standard error. The ratio of the fit amplitude to the standard error constitutes a criterion score that is assigned for each time point and for each template. A spike is detected when the criterion score exceeds a threshold and the highest-scoring template and the time of peak score is identified. The search for the next spike commences only after the score falls below a second, lower threshold to reduce false positives. The approach was extended to detect spikes with double-exponential decays with the sum of two templates. Receiver Operating Characteristic plots (ROCs) demonstrate that the algorithm detects >95% of manually identified spikes with a false-positive rate of ∼2%. ROCs demonstrate that the MF algorithm performs better than algorithms that detect spikes based on a derivative-threshold approach. The MF approach performs well and leads into approaches to identify spike parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
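The criterion score described above can be written down directly: a least-squares fit of a single template's amplitude to a data segment, with the score being the amplitude divided by its standard error. A minimal single-template sketch (the full algorithm scans a library of templates across every time point and applies the dual-threshold logic, which is omitted here):

```python
def mf_score(segment, template):
    """Least-squares amplitude of `template` in `segment`, plus the
    matched-filter criterion score = amplitude / SE(amplitude).
    For the one-parameter model x = a*t + noise:
      a = (t.x)/(t.t),  SE(a) = sqrt(residual_variance / (t.t))."""
    n = len(segment)
    tt = sum(t * t for t in template)
    a = sum(t * x for t, x in zip(template, segment)) / tt
    resid = [x - a * t for t, x in zip(template, segment)]
    s2 = sum(r * r for r in resid) / (n - 1)      # residual variance
    se = (s2 / tt) ** 0.5
    return a, a / se if se > 0 else float('inf')
```

A spike is then declared wherever the score exceeds the upper threshold, with the search resuming only after the score drops below the lower threshold.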
Galushko, V. G.; Vavriv, D. M.
Purpose: Efficiency analysis of an optimal algorithm for chirp signal processing based on the chirplet transform, as applied to detection of radar targets in uniformly accelerated motion. Design/methodology/approach: Standard methods of optimal filtration theory are used to investigate the ambiguity function of chirp signals. Findings: An analytical expression has been derived for the ambiguity function of chirp signals, which is analyzed with respect to detection of radar targets moving at a constant acceleration. The sidelobe level and characteristic width of the ambiguity function with respect to the coordinates of frequency and its rate of change have been estimated. The gain in the signal-to-noise ratio provided by the algorithm under consideration has been assessed, as compared with application of the standard Fourier transform to detection of chirp signals against a "white" noise background. It is shown that already with a comparatively small number of processing channels (elementary filters with respect to the frequency change rate) the gain in the signal-to-noise ratio exceeds 10 dB. A block diagram for implementation of the algorithm under consideration is suggested on the basis of a multichannel weighted Fourier transform. Recommendations for selection of the detection algorithm parameters have been developed. Conclusions: The obtained results testify to the efficiency of the algorithm under consideration for detection of radar targets moving at a constant acceleration. Nevertheless, it seems expedient to perform computer simulations of its operability taking the noise impact into account, along with trial measurements in real conditions.
Boosted by the health consequences and cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers, unobtrusiveness and reliability, defined the objectives of the research. We have demonstrated that a very agile, adaptive, energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that addresses the adaptive capability, which is also presented. The work demonstrates the robustness and reliability of our new algorithm, which will be the basis of a smart fall monitor, underlining the relevance of the results.
This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias using the IEMMC algorithm. Three performance evaluation indicators, namely sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in terms of global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias.
Lee, Gihyoun; Na, Sung Dae; Cho, Jin-Ho; Kim, Myoung Nam
This paper presents a voice activity detection (VAD) approach using a perceptual wavelet entropy neighbor slope (PWENS) in low signal-to-noise ratio (SNR) environments with a variety of noise types. The basis of our study is to use acoustic features that have large entropy variance in each wavelet critical band. The speech signal is decomposed by the proposed perceptual wavelet packet decomposition (PWPD), and the VAD function is extracted by PWENS. Finally, VAD is decided by the proposed VAD decision rule using two memory buffers. In order to evaluate the performance of the VAD decision, many speech samples under a variety of SNR conditions were used in the experiment. The performance of the VAD decision is confirmed using objective indexes such as a graph of the VAD decision and the relative error rate.
Svärd, Mikael; Rumman, Philip
The purpose of this study is to examine the possibility of accurately distinguishing fabricated news from authentic news stories using Naive Bayes classification algorithms. This involves a comparative study of two different machine learning classification algorithms. The work also contains an overview of how linguistic text analytics can be utilized for detection purposes, and an attempt was made to extract interesting information using word frequencies. A discussion of how different actors an...
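As a concrete illustration of the classification approach, here is a self-contained multinomial Naive Bayes classifier with Laplace smoothing trained on toy token lists. The example documents and vocabulary are invented for demonstration and are not from the study:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing.
    docs: list of token lists; labels: parallel list of class labels."""
    classes = set(labels)
    priors, counts, totals = {}, {}, {}
    vocab = set(w for d in docs for w in d)
    for c in classes:
        cdocs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(cdocs) / len(docs))
        counts[c] = Counter(w for d in cdocs for w in d)
        totals[c] = sum(counts[c].values())
    return priors, counts, totals, vocab

def classify(doc, model):
    """Return the class maximizing log P(c) + sum log P(w|c)."""
    priors, counts, totals, vocab = model
    best, best_lp = None, float('-inf')
    for c in priors:
        lp = priors[c] + sum(
            math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
            for w in doc if w in vocab)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy training data (illustrative only, not from the study).
model = train_nb(
    [["shocking", "miracle", "cure"], ["official", "report", "cure"],
     ["shocking", "secret"], ["official", "statement"]],
    ["fake", "real", "fake", "real"])
```

With this toy model, headlines dominated by sensational tokens score as "fake" and sober reporting vocabulary scores as "real"; the study's actual features and corpus are of course richer.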
normal distribution, we developed an expectation-maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the distribution of errors and is effective in estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.
Goldstein, Markus; Uchida, Seiichi
Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601
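One of the simplest families typically included in such comparisons is nearest-neighbor-based scoring. A minimal global k-NN anomaly score (mean Euclidean distance to the k nearest neighbors, larger = more anomalous); this is a generic sketch of the technique, not necessarily identical to any of the 19 evaluated implementations:

```python
def knn_scores(points, k=2):
    """Unsupervised k-NN anomaly score for each point: the mean
    distance to its k nearest neighbors. Points far from all others
    receive large scores. O(n^2) brute-force version for clarity."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(ds[:k]) / k)
    return scores
```

On a tight cluster with a single distant point, the distant point receives the highest score; "local" variants such as LOF instead normalize each score by the neighbors' own densities.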
Cimbálník, Jan; Hewitt, Angela; Worrell, Greg; Stead, Matt
High frequency oscillations (HFOs) are emerging as potentially clinically important biomarkers for localizing seizure-generating regions in epileptic brain. These events, however, are too frequent, and occur on too small a time scale, to be identified quickly or reliably by human reviewers. Many of the deficiencies of the HFO detection algorithms published to date are addressed by the CS algorithm presented here. The algorithm employs novel methods for: 1) normalization; 2) storage of parameters to model human expertise; 3) differentiating highly localized oscillations from filtering phenomena; and 4) defining the temporal extents of detected events. Receiver operating characteristic curves demonstrate very low false positive rates with concomitantly high true positive rates over a large range of detector thresholds. The temporal resolution is shown to be approximately ±5 ms for event boundaries. Computational efficiency is sufficient for use in a clinical setting. The algorithm's performance is directly compared to two established algorithms by Staba (2002) and Gardner (2007). Comparison with all published algorithms is beyond the scope of this work, but the features of all are discussed. All code and example data sets are freely available. The algorithm is shown to have high sensitivity and specificity for HFOs, to be robust to common forms of artifact in EEG, and to have performance adequate for use in a clinical setting. Copyright © 2017 Elsevier B.V. All rights reserved.
Carlos J. Corrada Bravo
We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection algorithm. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
A denial-of-sleep (DoSL) attack is a special category of denial-of-service attack that prevents battery-powered sensor nodes from going into sleep mode, thus affecting network performance. Existing schemes used for DoSL attack detection do not provide optimal energy conservation and key pairing operation. Hence, in this paper, an efficient Genetic Algorithm (GA)-based denial-of-sleep attack detection (GA-DoSLD) algorithm is suggested for analyzing the misbehaviors of the nodes. The suggested algorithm implements a Modified-RSA (MRSA) algorithm in the base station (BS) for generating and distributing the key pair among the sensor nodes. Before sending/receiving packets, the sensor nodes determine the optimal route using the Ad Hoc On-Demand Distance Vector (AODV) routing protocol and then ensure the trustworthiness of the relay node using the fitness calculation. The crossover and mutation operations detect and analyze the methods that the attackers use for implementing the attack. On determining an attacker node, the BS broadcasts the blocked information to all the other sensor nodes in the network. Simulation results prove that the suggested algorithm is optimal compared to existing schemes such as X-MAC, ZKP, and TE2P.
Chandola, Varun [ORNL; Vatsavai, Raju [ORNL
Online time series change detection is a critical component of many monitoring systems, such as space and airborne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian processes in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining a O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in the state of Iowa, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
Jung, Woo Sik
A module or independent subtree is a part of a fault tree whose child gates or basic events are not repeated in the remaining part of the fault tree. Modules are necessarily employed in order to reduce the computational cost of fault tree quantification. This paper presents a new linear time algorithm to detect modules of large fault trees. The size of cut sets can be substantially reduced by replacing independent subtrees in a fault tree with super-components. Chatterjee and Birnbaum developed properties of modules and demonstrated their use in fault tree analysis. Locks expanded the concept of modules to non-coherent fault trees. Independent subtrees were once manually identified while coding a fault tree for computer analysis; nowadays, they are automatically identified by the fault tree solver. A Dutuit and Rauzy (DR) algorithm to detect modules of a coherent or non-coherent fault tree was proposed in 1996. It is well known that this algorithm detects modules quickly since it is a linear time algorithm. The new algorithm minimizes computational memory and quickly detects modules. Furthermore, it can be easily implemented in industry fault tree solvers that are based on traditional Boolean algebra, binary decision diagrams (BDDs), or zero-suppressed BDDs. The new algorithm employs only two scalar variables per node, which hold volatile information: after the traversal and module detection of each node is finished, the volatile information is destroyed. Thus, the new algorithm does not require any additional computational memory or operations. It is recommended that this method be implemented in fault tree solvers for efficient probabilistic safety assessment (PSA) of nuclear power plants.
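The visit-date idea behind DR-style module detection can be sketched directly: stamp each node with first and last visit dates during a depth-first traversal (shared nodes are revisited), and a gate is a module iff every visit to every descendant falls strictly inside the gate's own interval. For clarity, this sketch enumerates descendants explicitly, which sacrifices the strict linear-time bound that the paper's bookkeeping achieves:

```python
def find_modules(gates, root):
    """Detect modules of a fault tree given as a dict mapping each
    gate to its list of children (gates or basic events).
    A gate g is a module iff no descendant of g is visited outside
    g's [first, last] visit interval in a depth-first traversal."""
    first, last, clock = {}, {}, [0]

    def visit(node):
        clock[0] += 1
        first.setdefault(node, clock[0])      # keep the earliest visit
        for child in gates.get(node, []):
            visit(child)                      # shared nodes revisited
        clock[0] += 1
        last[node] = clock[0]                 # overwritten -> latest visit

    visit(root)

    def descendants(node):
        out = set()
        for child in gates.get(node, []):
            out.add(child)
            out |= descendants(child)
        return out

    return [g for g in gates
            if all(first[g] < first[d] and last[d] < last[g]
                   for d in descendants(g))]
```

In the test below, basic event B is shared by gates G1 and G2, so neither is a module, while G3 (whose events D and E appear nowhere else) and the top gate are.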
Bejnordi, Babak Ehteshami; Veta, Mitko; Van Diest, Paul Johannes; Van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A W M; Hermsen, Meyke; Manson, Quirine F.; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory C R F; Bult, Peter; Beca, Francisco; Beck, Andrew H.; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir M.; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Phoulady, Hady Ahmady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M. Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui
IMPORTANCE: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. OBJECTIVE: To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph
Karabiber, Fethullah; Arik, Sabri
The Bi-i (Bio-inspired) Cellular Vision system is built mainly on Cellular Neural/Nonlinear Network (CNN) type (ACE16k) and Digital Signal Processing (DSP) type microprocessors. CNN theory, proposed by Chua, has advanced properties for image processing applications. In this study, edge detection algorithms are implemented on the Bi-i Cellular Vision System. Extracting the edges of an image correctly and quickly is of crucial importance for image processing applications. A threshold gradient based edge detection algorithm is implemented using the ACE16k microprocessor. In addition, a pre-processing operation is realized by using an image enhancement technique based on the Laplacian operator. Finally, morphological operations are performed as post-processing. The Sobel edge detection algorithm is performed by convolving the Sobel operators with the image in the DSP. The performances of the edge detection algorithms are compared using visual inspection and timing analysis. Experimental results show that the ACE16k has great computational power and that the Bi-i Cellular Vision System is well suited to applying image processing algorithms in real time.
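The Sobel step run on the DSP amounts to convolving two 3x3 kernels with the image and combining the responses. A minimal pure-Python sketch of that computation (the Bi-i implementation runs on the ACE16k/DSP hardware, which is not reproduced here):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) from the two Sobel
    kernels; border pixels are left at zero for simplicity."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical step edge produces a strong horizontal-gradient response along the edge column, which a threshold then turns into the binary edge map.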
In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...
algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...
Lyons, Imogen; Blandford, Ann
Complex medical devices such as infusion pumps are increasingly being used in patients' homes with little known about the impact on patient safety. Our aim was to better understand the risks to patient safety in this situation and how these risks might be minimised, by reference to incident reports. We identified 606 records of incidents associated with infusion devices that had occurred in a private home and were reported to the UK National Reporting and Learning Service (2005-2015 inclusive). We used thematic analysis to identify key themes. In this paper we focus on two emergent themes: detecting and diagnosing incidents; and locating the patient, lay caregivers and their family in incident reports. The majority of incidents were attributed to device malfunction, and resulted in the patient being under-dosed. Delays in recognising and responding to problems were identified, alongside challenges in identifying the cause. We propose a process model for fault diagnosis and correction. Patients and caregivers did not feature strongly in reports; we highlight how the device is in the home but of the care system, and propose an agent model to describe this; we also identify ways of mitigating this disjoint. Devices need to be appropriately tailored to the setting in which they are employed, and within a system of care that ensures they are used optimally and safely. Suggested features to improve patient safety include devices that can provide better feedback to identify problems and support resolution, alongside greater monitoring and technical support by care providers for both patients and frontline professionals. The proposed process and agent models provide a structure for reviewing safety and learning from incidents in home health care. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
This paper proposes two edge detection methods for medical images by integrating the advantages of the Gabor wavelet transform (GWT) and unsupervised clustering algorithms. The GWT is used to enhance the edge information in an image while suppressing noise. Following this, the k-means and Fuzzy c-means (FCM) clustering algorithms are used to convert a gray-level image into a binary image. The proposed methods are tested using medical images obtained through Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) devices, and a phantom image. The results show that the proposed methods are successful for edge detection, even in noisy cases.
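The clustering step described above can be sketched in pure Python: a minimal two-cluster k-means that binarizes a gray-level array. This omits the Gabor enhancement stage entirely, and the sample image values are illustrative, not from the paper.

```python
# Minimal 2-cluster k-means binarization of a gray-level image,
# a sketch of the clustering step only (Gabor enhancement omitted).

def kmeans_binarize(image, iters=20):
    pixels = [p for row in image for p in row]
    # Initialize the two cluster centers at the min and max gray level.
    c0, c1 = float(min(pixels)), float(max(pixels))
    for _ in range(iters):
        g0 = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in pixels if abs(p - c0) > abs(p - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    # Assign each pixel to the nearer center -> binary image.
    return [[0 if abs(p - c0) <= abs(p - c1) else 1 for p in row]
            for row in image]

binary = kmeans_binarize([[10, 12, 200], [11, 198, 205], [9, 13, 201]])
```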
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. In this approach, the membership functions of the fuzzy system are calculated using a Genetic Algorithm, taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1-score parameter.
Background/Purpose. Detection of tooth diagnosis has not yet been automated: dentists simply look at images and locate the diagnosis position in the tooth based on their experience. Using new technologies, scientists can implement detection and repair of tooth diagnosis intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. This method was applied to special 2D images; by extending our method, we can detect tooth diagnosis for all 2D and 3D images. Materials and Methods. In recent years, it has become possible to implement intelligent processing of images by high-efficiency optimization algorithms in many applications, especially for detection of dental caries and restorations without human intervention. In the present work, we explain the PSO algorithm with our detection formula for detection of dental caries and restorations. Image processing helped us to implement our method; to do so, pictures of teeth taken by digital radiography systems are used. Results and Conclusion. We implement a mathematical formula for the fitness of the PSO. Our results show that this method can detect dental caries and restorations in digital radiography pictures with good convergence. In fact, the error rate of this method was 8%, so it can be implemented for detection of dental caries and restorations. With suitable parameter choices, the error rate can be reduced even below 0.5%.
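A minimal PSO sketch is shown below; the fitness function here is a stand-in (the sphere function), not the caries-detection formula from the paper, and the swarm parameters are common textbook defaults.

```python
import random

# Minimal particle swarm optimisation (PSO): each particle moves under
# inertia plus attraction to its personal best and the global best.

def pso(fitness, dim, n_particles=20, iters=100, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(v * v for v in x), dim=2)
```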
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains as an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
Kim, Myeongkyu; Lee, Donghun
It is well known that, to locate humans in GPS-denied environments, a lower-limb kinematic solution based on an Inertial Measurement Unit (IMU), force plate, and pressure insoles is essential. The force plate and pressure insoles are used to detect foot-ground contacts. However, the use of multiple sensors is not desirable in most cases. This paper documents the development of an IMU-based FGCD (foot-ground contact detection) algorithm considering variations of both walking terrain and speed. All IMU outputs showing significant changes at the moments of the foot-ground contact phases are fully identified through experiments on five walking terrains. For the experiment on each walking terrain, variations of walking speed are also examined to confirm the correlations between walking speed and the main parameters in the FGCD algorithm. As a result, an FGCD algorithm that successfully detects four contact phases is developed, and its performance is validated. Practitioner Summary: In this research, it was demonstrated that the four contact phases of Heel strike (or Toe strike), Full contact, Heel off and Toe off can be independently detected regardless of walking speed and walking terrain, based on detection criteria composed of the ranges and rates of change of the main parameters measured from the Inertial Measurement Unit sensors.
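The threshold-on-IMU-parameters idea can be sketched as below. The phase names follow the abstract, but the two signals (vertical acceleration, pitch rate) and the threshold values are illustrative assumptions, not the calibrated criteria from the study.

```python
# Hedged sketch of rule-based foot-ground contact phase detection
# from one IMU sample. Signals and thresholds are assumptions.

def detect_phase(acc_z, gyro_pitch,
                 acc_hi=1.5, gyro_hi=0.5):   # assumed thresholds (g, rad/s)
    """Classify one IMU sample into a contact phase."""
    if acc_z > acc_hi and gyro_pitch < -gyro_hi:
        return "heel_strike"   # impact spike while foot rotates down
    if abs(gyro_pitch) <= gyro_hi and acc_z <= acc_hi:
        return "full_contact"  # foot flat, low rotation
    if gyro_pitch > gyro_hi and acc_z <= acc_hi:
        return "heel_off"      # heel rising, forward rotation
    return "toe_off"           # remaining push-off case
```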
Flach, Milan; Gans, Fabian; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus; Rodner, Erik; Bathiany, Sebastian; Bodesheim, Paul; Guanche, Yanira; Sippel, Sebastian; Mahecha, Miguel D.
Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to automatically detect anomalies
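The study's main finding, that feature extraction (e.g. subtracting seasonal cycles) matters more than the particular detector, can be illustrated with a univariate toy version of one of the recommended detectors: k-nearest-neighbours mean distance on deseasonalized residuals. The data and the period are invented for illustration.

```python
# Sketch of the recommended workflow: remove the mean seasonal cycle,
# then score each point by its mean distance to its k nearest
# neighbours in residual space (larger score = more anomalous).

def knn_mean_distance_scores(series, period, k=3):
    n = len(series)
    # Feature extraction: subtract the mean seasonal cycle.
    cycle = [0.0] * period
    for phase in range(period):
        vals = [series[i] for i in range(phase, n, period)]
        cycle[phase] = sum(vals) / len(vals)
    resid = [series[i] - cycle[i % period] for i in range(n)]
    # Anomaly detection: mean distance to the k nearest neighbours.
    scores = []
    for i in range(n):
        dists = sorted(abs(resid[i] - resid[j]) for j in range(n) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

data = [0, 1, 0, 1, 0, 1, 0, 9, 0, 1, 0, 1]   # period-2 cycle, spike at index 7
scores = knn_mean_distance_scores(data, period=2)
```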
Wanting Wang; John J. Qu; Xianjun Hao; Yongqiang Liu; William T. Sommers
Traditional fire detection algorithms mainly rely on hot spot detection using thermal infrared (TIR) channels with fixed or contextual thresholds. Three solar reflectance channels (0.65 μm, 0.86 μm, and 2.1 μm) were recently adopted into the MODIS version 4 contextual algorithm to improve the active fire detection. In the southeastern United...
In order to improve the performance of the non-binary low-density parity-check (LDPC) hard-decision decoding algorithm and to reduce the complexity of decoding, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in the order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Qian, Jinfang; Zhang, Changjiang
An efficient algorithm based on the continuous wavelet transform combined with pre-knowledge, which can be used to detect defects of glass bottle mouths, is proposed. Firstly, under the condition of a ball integral light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image. A continuous wavelet transform is then applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to get a normal binary bottle mouth mask. A glass bottle to be detected is moved to the detection zone by a conveyor belt. Both the bottle mouth image and the binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest. Four parameters (number of connected regions, coordinate of centroid position, diameter of inner cycle, and area of annular region) can be computed from the region of interest. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify the defect conditions of glass bottles. Finally, glass bottles of the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles and achieves 98% detection accuracy.
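The histogram-smoothing step above is a plain moving average; a minimal sketch follows (the window size is an assumption, and edge bins are averaged over the truncated window):

```python
# Moving-average smoothing of a gray-level histogram, as used before
# wavelet-based threshold selection. Window size is illustrative.

def moving_average(hist, window=5):
    half = window // 2
    out = []
    for i in range(len(hist)):
        lo, hi = max(0, i - half), min(len(hist), i + half + 1)
        out.append(sum(hist[lo:hi]) / (hi - lo))  # truncated at the edges
    return out

smoothed = moving_average([0, 0, 10, 0, 0])
```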
To accurately achieve side-scan sonar (SSS) image target detection, a novel target detection algorithm based on a neutrosophic set (NS) and diffusion maps (DMs) is proposed in this paper. Firstly, the neutrosophic subset images were obtained by transforming the input SSS image into the NS domain. Secondly, the shadowed areas of the SSS image were detected using the single gray-value threshold method before the diffusion map was calculated. Lastly, based on the diffusion map, the target areas were detected using the improved target scoring equation defined by the diffusion distance and texture feature. The experiments using SSS images of single clear and unclear targets, with or without shadowed areas, showed that the algorithm accurately detects targets. Experiments using SSS images of multiple targets, with or without shadowed areas, showed that no false or missed detections occurred. The target areas were also accurately detected in SSS images with complex features such as sand-wave terrain. The accuracy and effectiveness of the proposed algorithm were assessed.
Enki, Doyo G; Garthwaite, Paul H; Farrington, C Paddy; Noufaily, Angela; Andrews, Nick J; Charlett, Andre
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace.
Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence
The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.
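Step (d), detecting cycles that block sweep completion, can be sketched serially with Kahn's topological sort: any node whose in-degree never drops to zero lies on a cycle or downstream of one. The toy dependence graph below is illustrative, not the radiation-transport data structure, and the paper's parallel version is not reproduced here.

```python
from collections import deque

# Serial sketch of cycle detection in a sweep-dependence graph via
# Kahn's topological sort. Nodes left over after the sort are on a
# cycle, or unreachable because a cycle blocks them upstream.

def find_cycle_nodes(n, edges):
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # Any node never reaching in-degree 0 is blocked by a cycle.
    return [i for i in range(n) if indeg[i] > 0]
```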
Ma, Tianren; Xia, Zhengyou
Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting much of the information carried by nodes and edges. Moreover, these algorithms do not consider each node on its own merits. The label propagation algorithm (LPA) is a near-linear-time algorithm which aims to find communities in a network; its high efficiency has attracted many researchers, and in recent years several improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walk, a matrix is constructed to measure the similarity of nodes, avoiding the random choice in LPA. Secondly, a new metric, IAS (importance and similarity), is calculated from node importance and the similarity matrix, which avoids the random selection in the original LPA and improves the algorithm's stability. Finally, tests on real-world and synthetic networks are given. The results show that this algorithm performs better than existing methods in finding community structure.
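For reference, the baseline LPA that NILPA improves upon can be sketched in a few lines: each node repeatedly adopts the most frequent label among its neighbours. Standard LPA breaks ties at random (the very randomness NILPA removes); the deterministic tie-break here, keep the current label if tied, otherwise take the largest, is our own simplification to make the run reproducible.

```python
# Baseline label propagation (LPA) sketch with a deterministic
# tie-break (standard LPA breaks ties at random).

def label_propagation(adj, max_iter=100):
    labels = {v: v for v in adj}          # every node starts in its own community
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            mx = max(counts.values())
            top = [l for l in counts if counts[l] == mx]
            if labels[v] not in top:      # keep current label on ties
                labels[v] = max(top)
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
two_cliques = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
               4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(two_cliques)
```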
In this paper, we present an image segmentation algorithm based on adaptive weighted mathematical morphology edge detectors. The performance of the proposed algorithm is demonstrated on the Lena image. The input of the proposed algorithm is a grey-level image. The image is first processed by the mathematical morphological closing and dilation-residue edge detector to enhance the edge features and sketch out the contour of the image, respectively. Then the adaptive-weight SE (structuring element) operation is applied to the edge-extracted image to fuse edge gaps and fill up holes. Experimental results show that it can not only extract detailed edges well, but also preserve edge integrity better than classical edge detection algorithms.
The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm and the standard twin-comparison method.
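The standard twin-comparison baseline mentioned above works with two thresholds: a frame difference above the high threshold marks a cut, while differences between the low and high thresholds open a candidate gradual transition whose accumulated difference is again checked against the high threshold. The sketch below reduces each frame to a single scalar (e.g. a histogram summary) and uses invented thresholds; the paper's dominant-color extension is not included.

```python
# Twin-comparison sketch: d >= t_hi -> cut; t_lo <= d < t_hi starts a
# candidate gradual transition, accepted once accumulated d >= t_hi.

def twin_comparison(frames, t_lo=2.0, t_hi=10.0):
    cuts, graduals = [], []
    start, acc = None, 0.0
    for i in range(1, len(frames)):
        d = abs(frames[i] - frames[i - 1])
        if d >= t_hi:
            cuts.append(i)
            start, acc = None, 0.0
        elif d >= t_lo:
            if start is None:
                start, acc = i, 0.0
            acc += d
            if acc >= t_hi:
                graduals.append((start, i))
                start, acc = None, 0.0
        else:                      # quiet frame ends any candidate
            start, acc = None, 0.0
    return cuts, graduals

frames = [0, 0, 0, 20, 20, 23, 26, 29, 32, 32]  # cut at 3, fade over 5..8
cuts, graduals = twin_comparison(frames)
```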
INTRODUCTION: Biomarker-based cross-sectional incidence estimation requires a Recent Infection Testing Algorithm (RITA) with an adequately large mean recency duration, to achieve reasonable survey counts, and a low false-recent rate, to minimise exposure to further bias and imprecision. Estimating these characteristics requires specimens from individuals with well-known seroconversion dates or confirmed long-standing infection. Specimens with well-known seroconversion dates are typically rare and precious, presenting a bottleneck in the development of RITAs. METHODS: The mean recency duration and a 'false-recent rate' are estimated from data on seroconverting blood donors. Within an idealised model for the dynamics of false-recent results, blood donor specimens were used to characterise RITAs by a new method that maximises the likelihood of cohort-level recency classifications, rather than modelling individual sojourn times in recency. RESULTS: For a range of assumptions about the false-recent results (0% to 20% of biomarker response curves failing to reach the threshold distinguishing test-recent and test-non-recent infection), the mean recency duration of the Vironostika-LS ranged from 154 (95% CI: 96-231) to 274 (95% CI: 234-313) days in the South African donor population (n = 282), and from 145 (95% CI: 67-226) to 252 (95% CI: 194-308) days in the American donor population (n = 106). The significance of gender and clade on performance was rejected (p-value = 10%), and utility in incidence estimation appeared comparable to that of a BED-like RITA. Assessment of the Vitros-LS (n = 108) suggested potentially high false-recent rates. DISCUSSION: The new method facilitates RITA characterisation using widely available specimens that were previously overlooked, at the cost of possible artefacts. While accuracy and precision are insufficient to provide estimates suitable for incidence surveillance, a low-cost approach for preliminary
Jennifer J Palmer
Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense, but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan. Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7-month period were collected to construct and evaluate over 14,000 four-item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians was asked to make referral decisions based on the symptom dataset. The best performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9-92.6%, a negative predictive value of up to 98.8% and a positive predictive value in this context of 8.4-8.7%. In terms of sensitivity, these out-performed more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive. In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes. The algorithms proposed here, though promising, should be
This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on three texture features: uniformity, intensity and standard deviation. These features are first computed and mapped against diagnostic opinion by the eye expert to define the basic threshold of the screening system, and later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using our proposed system has been suggested, which confirms the telemedicine application of the proposed system.
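The three texture features named above are standard first-order statistics and can be computed directly from a flat list of gray values, as sketched below. The screening thresholds would be fitted against expert opinion, so none are hard-coded here.

```python
# The three texture features from the abstract over a flat list of
# gray values: uniformity (sum of squared gray-level probabilities),
# intensity (mean), and standard deviation.

def texture_features(pixels):
    n = len(pixels)
    mean = sum(pixels) / n                          # intensity
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5                                # standard deviation
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    uniformity = sum((c / n) ** 2 for c in counts.values())
    return uniformity, mean, std
```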
Gibson, Sarah; Judy, Jack W; Marković, Dejan
Applications such as brain-machine interfaces require hardware spike sorting in order to 1) obtain single-unit activity and 2) perform data reduction for wireless data transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection, feature-extraction, and dimensionality-reduction algorithms for spike sorting are described and evaluated in terms of accuracy versus complexity. The nonlinear energy operator is chosen as the optimal spike-detection algorithm, being most robust over noise and relatively simple. Discrete derivatives is chosen as the optimal feature-extraction method, maintaining high accuracy across signal-to-noise ratios with a complexity orders of magnitude less than that of traditional methods such as principal-component analysis. We introduce the maximum-difference algorithm, which is shown to be the best dimensionality-reduction method for hardware spike sorting.
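The nonlinear energy operator chosen above is simple enough to state inline: psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded to mark spikes. A common practice is to set the threshold as a scaled mean of psi; the scaling factor below is an assumption, not the paper's value.

```python
# Nonlinear energy operator (NEO) spike detection:
#   psi[n] = x[n]^2 - x[n-1] * x[n+1]
# Samples where psi exceeds a scaled mean of psi are flagged.

def neo_detect(x, factor=8.0):      # threshold factor is an assumption
    psi = [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
    thresh = factor * sum(psi) / len(psi)
    # indices shifted by +1 to map psi back onto the original signal
    return [n + 1 for n, p in enumerate(psi) if p > thresh]

spikes = neo_detect([0] * 10 + [10] + [0] * 10)   # single spike at index 10
```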
Austin J. Cooner
Remote sensing continues to be an invaluable tool in earthquake damage assessments and emergency response. This study evaluates the effectiveness of multilayer feedforward neural networks, radial basis neural networks, and Random Forests in detecting earthquake damage caused by the 2010 Port-au-Prince, Haiti 7.0 moment magnitude (Mw) event. Additionally, textural and structural features including entropy, dissimilarity, Laplacian of Gaussian, and rectangular fit are investigated as key variables for high-spatial-resolution imagery classification. Our findings show that each of the algorithms achieved nearly a 90% kernel density match using the United Nations Operational Satellite Applications Programme (UNITAR/UNOSAT) dataset as validation. The multilayer feedforward network was able to achieve an error rate below 40% in detecting damaged buildings. Spatial features of texture and structure were far more important in algorithmic classification than spectral information, highlighting the potential for future implementation of machine learning algorithms which use panchromatic or pansharpened imagery alone.
Computer vision systems have found wide application in the food processing industry to perform quality evaluation. These systems make it possible to replace human inspectors in the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect glutinous rice flour slurry (GRFS) gelatinization in an enzymatic 'dodol' processing. The onset of GRFS gelatinization is critical in determining the quality of an enzymatic 'dodol'. Combining these two algorithms made it possible to detect the gelatinization of the GRFS. The results show that the gelatinization of the GRFS occurred in the time range of 11.75 minutes to 14.75 minutes for 24 batches of processing. This paper highlights the capability of computer vision, using our proposed algorithms, in monitoring and controlling an enzymatic 'dodol' processing via image processing technology.
US Army Research Laboratory, Sensors and Electron Devices Directorate. This work applies morphological image techniques to the energy-detection scenario of signals in the RF spectrum domain. The algorithm automatically establishes a detection threshold and works well for both signal-present and noise-only RF spectrum data files.
Rajeswaran, Aravind; Narasimhan, Sridharakumar; Narasimhan, Shankar
Leak detection in urban water distribution networks (WDNs) is challenging given their scale, complexity, and limited instrumentation. We present an algorithm for leak detection in WDNs, which involves making additional flow measurements on-demand, and repeated use of water balance. Graph partitioning is used to determine the location of flow measurements, with the objective to minimize the measurement cost. We follow a multi-stage divide and conquer approach. In every stage, a section of the ...
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko
This paper addresses adjustments, implementation and performance comparison of the Moving Average with Local Difference (MALD) method for ceramic tile surface defect detection. The ceramic tile production process is completely autonomous, except for the final stage, where the human eye is required for defect detection. Recent computational platform development and advances in machine vision provide us with several options for MALD algorithm implementation. In order to exploit the shortest execution tim...
Colilla, Susan; Tov, Elad Yom; Zhang, Ling; Kurzinger, Marie-Laure; Tcherny-Lessenot, Stephanie; Penfornis, Catherine; Jen, Shang; Gonzalez, Danny S; Caubel, Patrick; Welsh, Susan; Juhaeri, Juhaeri
Post-marketing drug surveillance is largely based on signals found in spontaneous reports from patients and healthcare providers. Rare adverse drug reactions and adverse events (AEs) that may develop after long-term exposure to a drug or from drug interactions may be missed. The US FDA and others have proposed that web-based data could be mined as a resource to detect latent signals associated with adverse drug reactions. Recently, a web-based search query method called a query log reaction score (QLRS) was developed to detect whether AEs associated with certain drugs could be found from search engine query data. In this study, we compare the performance of two other algorithms, the proportional query ratio (PQR) and the proportional query rate ratio (Q-PRR) against that of two reference signal-detection algorithms (SDAs) commonly used with the FDA AE Reporting System (FAERS) database. In summary, the web query methods have moderate sensitivity (80%) in detecting signals in web query data compared with reference SDAs in FAERS when the web query data are filtered, but the query metrics generate many false-positives and have low specificity compared with reference SDAs in FAERS. Future research is needed to find better refinements of query data and/or the metrics to improve the specificity of these web query log algorithms.
Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew
Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure the pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert and a second one is utilized by the waist-mounted device. The primary fall detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5 s time intervals. This differential acceleration provides the quantification via an energy index. From this index one may ascertain different gaits and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm will classify an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data is then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
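The primary algorithm's windowed differential-acceleration index can be sketched as below. The sample rate, the particular index (max-minus-min of the acceleration magnitude per window) and the threshold are illustrative assumptions; only the 1.5 s window length comes from the abstract.

```python
# Sketch of windowed fall detection from waist-accelerometer
# magnitudes: per 1.5 s window, the differential acceleration
# (max - min) acts as an energy index; exceeding a threshold
# flags a candidate fall/stumble event.

def fall_events(acc_mag, fs=20, window_s=1.5, threshold=4.0):
    win = int(fs * window_s)
    events = []
    for i in range(0, len(acc_mag) - win + 1, win):
        chunk = acc_mag[i:i + win]
        index = max(chunk) - min(chunk)   # differential acceleration
        if index > threshold:
            events.append(i)              # window start of the event
    return events

quiet = [1.0] * 30                                      # standing still (~1 g)
fall = [1.0] * 10 + [0.2, 6.0, 1.0] + [1.0] * 17        # free-fall dip + impact
events = fall_events(quiet + fall)
```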
Alphus D. Wilson
Novel mobile electronic-nose (e-nose) devices and algorithms capable of real-time detection of industrial and municipal pollutants released from point sources have recently been developed by scientists worldwide; they are useful for monitoring specific environmental-pollutant levels for enforcement and implementation of effective pollution-abatement programs. E-nose...
In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required high-resolution coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, operating directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and for foreground detection. Each moving target in the compressive sampling domain is sparsely represented over a compressive feature dictionary spanned by target templates and noise templates, and an l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves real-time speed up to 10 times faster than that of the l1 tracker without any optimization.
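The measurement step shared by these mask choices can be sketched abstractly. This is a hedged illustration of compressive sampling with random Gaussian and random binary (±1) measurement ensembles, not the paper's coded-aperture optics; all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 256, 64                           # ambient and compressed dimensions (m << n)
x = np.zeros(n)
x[[10, 50, 200]] = [1.0, -2.0, 0.5]      # sparse scene: only 3 nonzero entries

# Random Gaussian measurement matrix, scaled by 1/sqrt(m)
phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = phi @ x                               # compressed measurements

# A random binary (+/-1) phase mask is an alternative measurement ensemble
phi_bin = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y_bin = phi_bin @ x

print(y.shape, y_bin.shape)               # both (64,): 4x fewer samples than the scene
```

Recovering `x` from `y` would then require an l1 solver, as the abstract describes for template coefficients.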
Chen, Joyce L; Kumar, Sukhbinder; Williamson, Victoria J; Scholz, Jan; Griffiths, Timothy D; Stewart, Lauren
The advent of diffusion magnetic resonance imaging (MRI) allows researchers to virtually dissect white matter fiber pathways in the brain in vivo. This, for example, allows us to characterize and quantify how fiber tracts differ across populations in health and disease, and change as a function of training. Based on diffusion MRI, prior literature reports the absence of the arcuate fasciculus (AF) in some control individuals and as well in those with congenital amusia. The complete absence of such a major anatomical tract is surprising given the subtle impairments that characterize amusia. Thus, we hypothesize that failure to detect the AF in this population may relate to the tracking algorithm used, and is not necessarily reflective of their phenotype. Diffusion data in control and amusic individuals were analyzed using three different tracking algorithms: deterministic and probabilistic, the latter either modeling two or one fiber populations. Across the three algorithms, we replicate prior findings of a left greater than right AF volume, but do not find group differences or an interaction. We detect the AF in all individuals using the probabilistic 2-fiber model, however, tracking failed in some control and amusic individuals when deterministic tractography was applied. These findings show that the ability to detect the AF in our sample is dependent on the type of tractography algorithm. This raises the question of whether failure to detect the AF in prior studies may be unrelated to the underlying anatomy or phenotype.
We propose in this paper a detection algorithm based on a cost function that jointly tests the correlation induced by the cyclic prefix and the fact that this correlation is time-periodic. In the first part of the paper, the cost function is introduced and some analytical results are given. In particular, the impacts of noise and the multipath channel on its values are theoretically analysed. In the second part of the paper, some asymptotic results are derived. A first exploitation of these results is used to build a detection test based on the false-alarm probability. These results are also used to evaluate the impact of the number of cycle frequencies taken into account in the cost function on the detection performance. Through numerical estimations, we have been able to establish that the proposed algorithm detects DVB-T signals at an SNR of −12 dB. As a comparison, and in the same context, the detection algorithm proposed by the 802.22 WG in 2006 is able to detect these signals at an SNR of −8 dB.
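The underlying cyclic-prefix correlation idea can be sketched as follows. This is a simplified illustration, not the paper's cost function: because each cyclic prefix repeats the last L samples of its symbol N samples earlier, a lag-N autocorrelation of the received stream peaks for OFDM-like signals but not for white noise. All parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L, num_syms = 64, 16, 50      # FFT size, cyclic-prefix length, symbol count

# Toy OFDM-like baseband: each symbol is N random samples preceded by a
# copy of its last L samples (the cyclic prefix).
symbols = []
for _ in range(num_syms):
    body = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    symbols.append(np.concatenate([body[-L:], body]))
signal = np.concatenate(symbols)

def cp_metric(x, n=N):
    """Magnitude of the lag-n autocorrelation, normalized by signal energy."""
    span = len(x) - n
    num = np.abs(np.sum(x[:span] * np.conj(x[n:n + span])))
    return num / np.sum(np.abs(x[:span]) ** 2)

noise = (rng.standard_normal(len(signal)) +
         1j * rng.standard_normal(len(signal))) / np.sqrt(2)
print(cp_metric(signal) > cp_metric(noise))   # True: CP correlation separates the two
```

The paper's contribution goes further by also testing the time-periodicity of this correlation across cycle frequencies.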
Full Text Available The use of Unmanned Aerial Vehicles (UAVs has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs’ flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
Zubair Ahmed Khan
MANETs (Mobile Ad Hoc Networks) are slowly integrating into our everyday lives; their most prominent uses are visible in disaster and war-struck areas where physical infrastructure is almost impossible or very hard to build. MANETs, like other networks, face the threat of malicious users and their activities. A number of attacks have been identified, but the most severe of them is the wormhole attack, which can succeed even against encrypted traffic and secure networks. Once a wormhole is launched successfully, the severity increases given that the attackers can then launch other attacks too. This paper presents a comprehensive algorithm for the detection of exposed as well as hidden wormhole attacks while keeping the detection rate at a maximum and at the same time reducing false alarms. The algorithm does not require any extra hardware, time synchronization or any special type of nodes. The architecture combines the routing table, RTT (Round Trip Time) and RSSI (Received Signal Strength Indicator) for comprehensive detection of the wormhole attack. The proposed technique is robust, lightweight, has low resource requirements and provides real-time detection against the wormhole attack. Simulation results show that the algorithm provides a higher detection rate and packet delivery ratio with negligible false alarms, and is also better in terms of ease of implementation, detection accuracy/speed and processing overhead.
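The RTT component of such a detector can be sketched simply. This is a hedged, minimal illustration (not the paper's combined routing-table/RTT/RSSI architecture): a wormhole tunnel makes a distant node appear one hop away, but its round-trip time stays far above the local median. The `factor` parameter is a hypothetical tuning choice.

```python
from statistics import median

def flag_wormhole_links(rtts, factor=3.0):
    """Return neighbor IDs whose round-trip time is suspiciously large.

    `rtts` maps neighbor id -> measured one-hop RTT in milliseconds.
    """
    m = median(rtts.values())
    return sorted(n for n, t in rtts.items() if t > factor * m)

rtts = {"n1": 2.1, "n2": 1.9, "n3": 2.4, "n4": 14.8}  # n4 is tunneled via a wormhole
print(flag_wormhole_links(rtts))   # ['n4']
```

A real detector would combine this with RSSI consistency checks, as the abstract describes, to keep false alarms negligible.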
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
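The residual-analysis idea at the core of such schemes can be sketched without the ANFIS decision stage. This is a hedged baseline, not the paper's method: the innovation (measurement minus Kalman-filter prediction) is flagged when it exceeds a fixed number of standard deviations; the 3-sigma gate is a common hypothetical choice.

```python
def residual_fault_flags(measurements, predictions, sigma, gate=3.0):
    """Flag samples whose innovation exceeds `gate` standard deviations.

    `predictions` would come from a Kalman filter in practice; here both
    sequences are supplied directly for illustration.
    """
    return [abs(z - zhat) > gate * sigma
            for z, zhat in zip(measurements, predictions)]

z = [10.1, 10.0, 10.2, 14.5, 10.1]      # fourth sample is a sensor fault
zhat = [10.0, 10.0, 10.1, 10.1, 10.0]   # filter predictions
print(residual_fault_flags(z, zhat, sigma=0.2))
```

The paper replaces the fixed gate with an online-trained ANFIS so the decision boundary adapts to flight data.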
Qutub, Mohammed; Govindan, Parasanth; Vattappillil, Anupama
Abstract Background: Clostridium difficile-associated diarrhea (CDAD) is the commonest cause of nosocomial diarrhea. Methods for C. difficile detection include toxin or enzyme detection by immunoassays (EIAs), the cytotoxicity neutralization assay (CCNA) or FDA-approved PCR. Due to the tedious and time-consuming nature of the CCNA and the suboptimal specificity and sensitivity of EIAs, these assays cannot be used as stand-alone tests. One approach to combining these assays is a two-step algorithm, where the Ag-EIA is used as a screening test and positives are confirmed either by a toxin-detection enzyme immunoassay or by CCNA. Another approach is a three-step algorithm, where the Ag-EIA is used as a screening test, all positives are tested by a toxin-detection EIA, and toxin-negative samples are further tested either by PCR or by CCNA. We therefore aimed to evaluate a new two-step algorithm for the detection of toxigenic C. difficile and its role in the improvement of turn-around time. Methods: A total of 3518 nonformed stool specimens from suspected cases of CDAD were collected. Specimens were tested either by GDH-toxin A/B ICA or by GeneXpert C. difficile PCR as per the algorithm (Figure 1). Results: Of the 3518 stool specimens tested, 130 (3.70%) were positive and 2989 (84.96%) were negative by GDH-toxin A/B ICA, while 399 (11.34%) required PCR. None of the GDH-negative, toxin A/B-positive samples tested positive by PCR. Also, none of the GDH-negative, toxin A/B-negative samples tested positive by PCR (Figure 2). Conclusion: The study indicates that when the GDH-toxin A/B ICA is used, almost 89% of the results can be reported within 30 minutes, about 3.7% of them being positive and 84.96% negative. Discrepant GDH and toxin A/B results were confirmed by PCR. The new algorithm offered rapid detection of C. difficile by ICA, judicious use of PCR and effectively reduced turnaround time. Figure-1: Two-step algorithm for C. difficile testing. Figure-2: Results of two
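The branching structure of the two-step algorithm described above can be written out directly. This is a sketch of the decision flow as stated in the abstract, not clinical guidance: concordant ICA results are reported immediately (~30 minutes), and discordant GDH/toxin results reflex to PCR. The function name and `pcr` callable are illustrative assumptions.

```python
def cdiff_two_step(gdh_pos, toxin_pos, pcr=None):
    """Two-step C. difficile testing flow sketched from the abstract.

    `gdh_pos` / `toxin_pos` are the GDH-toxin A/B ICA results;
    `pcr` is a callable returning True/False, invoked only when needed.
    """
    if gdh_pos and toxin_pos:
        return "positive"                 # concordant positive: report at once
    if not gdh_pos and not toxin_pos:
        return "negative"                 # concordant negative: report at once
    return "positive" if pcr() else "negative"   # discordant: reflex to PCR

print(cdiff_two_step(True, True))                      # positive, no PCR needed
print(cdiff_two_step(True, False, pcr=lambda: False))  # discordant, resolved by PCR
```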
Cremers, Charlotte H P; Dankbaar, Jan Willem; Vergouwen, Mervyn D I; Vos, Pieter C; Bennink, Edwin; Rinkel, Gabriel J E; Velthuis, Birgitta K; van der Schaaf, Irene C
Tracer delay-sensitive perfusion algorithms in CT perfusion (CTP) result in an overestimation of the extent of ischemia in thromboembolic stroke. In diagnosing delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage (aSAH), delayed arrival of contrast due to vasospasm may also overestimate the extent of ischemia. We investigated the diagnostic accuracy of tracer delay-sensitive and tracer delay-insensitive algorithms for detecting DCI. From a prospectively collected series of aSAH patients admitted between 2007 and 2011, we included patients with any clinical deterioration other than rebleeding within 21 days after SAH who underwent NCCT/CTP/CTA imaging. Causes of clinical deterioration were categorized into DCI and no DCI. CTP maps were calculated with tracer delay-sensitive and tracer delay-insensitive algorithms and were visually assessed for the presence of perfusion deficits by two independent observers with different levels of experience. The diagnostic value of both algorithms was calculated for both observers. Seventy-one patients were included. For the experienced observer, the positive predictive values (PPVs) were 0.67 for the delay-sensitive and 0.66 for the delay-insensitive algorithm, and the negative predictive values (NPVs) were 0.73 and 0.74. For the less experienced observer, PPVs were 0.60 for both algorithms, and NPVs were 0.66 for the delay-sensitive and 0.63 for the delay-insensitive algorithm. Test characteristics are comparable for tracer delay-sensitive and tracer delay-insensitive algorithms for the visual assessment of CTP in diagnosing DCI. This indicates that both algorithms can be used for this purpose.
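The PPV/NPV figures quoted above come straight from confusion-matrix counts. As a small sketch (with hypothetical counts chosen only to reproduce values of the same magnitude as the experienced observer's 0.67/0.73, not the study's actual data):

```python
def predictive_values(tp, fp, tn, fn):
    """Return (PPV, NPV) from confusion-matrix counts."""
    ppv = tp / (tp + fp)   # P(disease present | test positive)
    npv = tn / (tn + fn)   # P(disease absent  | test negative)
    return ppv, npv

# Hypothetical counts for illustration only:
ppv, npv = predictive_values(tp=20, fp=10, tn=30, fn=11)
print(round(ppv, 2), round(npv, 2))   # 0.67 0.73
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the prevalence of DCI in the studied cohort.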
Mitani, Akihisa; Hakamata, Yukichika; Hosoi, Megumi; Horie, Masafumi; Murano, Yoko; Saito, Akira; Yanagimoto, Shintaro; Tsuji, Shoji; Yamamoto, Kazuhiko; Nagase, Takahide
Patients with primary spontaneous pneumothorax (PSP) usually complain of sudden-onset dyspnea and pleuritic chest pain. However, asymptomatic PSP has been incidentally detected on chest X-rays. In this study, we analyzed the incidence, characteristics, risk factors, and prognosis of asymptomatic PSP detected during regular medical check-ups in university students. In this study, 101,709 chest X-rays were performed during medical check-ups for students at the University of Tokyo between April 2011 and March 2016. Among them, 43 cases of asymptomatic PSP (0.042%) were detected. We calculated the lung collapse rate of pneumothorax using Kircher's method. We also analyzed risk factors associated with asymptomatic PSP using characteristics inspected in medical check-ups. The incidence of asymptomatic PSP was significantly higher in men than in women (0.050% vs 0.018%). Multivariate analysis revealed an association of younger age, greater height, lower body mass index, and greater height growth per year with an increased risk of asymptomatic PSP in male students. Mild lung collapse (up is very important because a considerable number of patients with mild lung collapse eventually require an invasive medical procedure.
Abstract Background: Drinking water contamination, with the capability to affect large populations, poses a significant risk to public health. In recent water contamination events, the impact of contamination on public health appeared in data streams monitoring health-seeking behavior. While public health surveillance has traditionally focused on the detection of pathogens, developing methods for the detection of illness from fast-acting chemicals has not been an emphasis. Methods: An automated surveillance system was implemented for Cincinnati's drinking water contamination warning system to monitor health-related 911 calls in the city of Cincinnati. Incident codes indicative of possible water contamination were filtered from all 911 calls for analysis. The 911 surveillance system uses a space-time scan statistic to detect potential water contamination incidents. The frequency and characteristics of the 911 alarms over a 2.5-year period were studied. Results: During the evaluation, 85 alarms occurred, although most occurred prior to the implementation of an additional alerting constraint in May 2009. Data were available for analysis approximately 48 minutes after calls, indicating alarms may be generated 1-2 hours after a rapid increase in call volume. Most alerts occurred in areas of high population density. The average alarm area was 9.22 square kilometers, and the average number of cases in an alarm was nine calls. Conclusions: The 911 surveillance system provides timely notification of possible public health events, but it did have limitations. While the alarms contained incident codes and the location of the caller, additional information such as medical status was not available to assist in validating the cause of the alarm. Furthermore, users indicated that a better understanding of 911 system functionality is necessary to understand how it would behave in an actual water contamination event.
Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María
One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for micro and small aerial vehicles, due to Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have a probability of getting close to the UAV. Second, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works.
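The convex-hull expansion cue can be sketched in isolation. This is a hedged illustration of the geometric step only (hull of tracked feature points in consecutive frames, ratio of hull areas), not the full detection pipeline; the sample points are synthetic.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull around the feature points."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
                         for i in range(len(h))))

prev = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]   # feature points, frame t
curr = [(2*x, 2*y) for x, y in prev]                   # frame t+1: obstacle looms larger
ratio = hull_area(curr) / hull_area(prev)
print(ratio)   # 4.0: the apparent area quadrupled, a strong approach cue
```

A ratio persistently above 1 across frames indicates an approaching obstacle; a threshold on this ratio would then trigger the avoidance decision.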
Positive obstacles can damage field robots while traveling in the field, and the field autonomous land vehicle is a typical field robot. This article presents a feature matching and fusion-based algorithm to detect obstacles using LiDARs for field autonomous land vehicles. There are three main contributions: (1) A novel setup method for a compact LiDAR is introduced. This method improves the LiDAR data density and reduces the blind region of the LiDAR sensor. (2) A mathematical model is deduced under this new setup method, and the ideal scan line is generated using the deduced model. (3) Based on the proposed mathematical model, a feature matching and fusion (FMAF)-based algorithm is presented, which is employed to detect obstacles. Experimental results show that the performance of the proposed algorithm is robust and stable, and the computing time is reduced by two orders of magnitude compared with other existing algorithms. This algorithm has been successfully applied to our autonomous land vehicle, which won the championship in the Chinese "Overcome Danger 2014" ground unmanned vehicle challenge.
Wang, Xingmei; Hao, Wenqian; Li, Qiming
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization (QPSO) algorithm. The improved quantum-behaved particle swarm optimization algorithm (IQPSO) lets particles adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals, following the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the acceptance and influence functions are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good cluster centres according to the grey-level distribution of underwater sonar images and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results on benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.
A node replication attack against a wireless sensor network involves surreptitious efforts by an adversary to insert duplicate sensor nodes into the network while avoiding detection. Due to the lack of tamper-resistant hardware and the low cost of sensor nodes, replication attacks take little effort to carry out. Naturally, detecting these replica nodes is a very important task and has been studied extensively. In this paper, we propose a novel distributed, randomized sensor duplicate detection algorithm called Discard to detect node replicas in group-deployed wireless sensor networks. Our protocol is an epidemic, self-organizing duplicate detection scheme, which exhibits emergent properties. Epidemic schemes have found diverse applications in distributed computing: load balancing, topology management, audio and video streaming, computing aggregate functions, failure detection, and network and resource monitoring, to name a few. To the best of our knowledge, our algorithm is the first attempt at exploring the potential of this paradigm to detect replicas in a wireless sensor network. Through analysis and simulation, we show that our scheme achieves robust replica detection with substantially lower communication, computational and storage requirements than prior schemes in the literature.
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-powered field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application, so several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
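For contrast with the paper's memory-optimized one-scan method, the basic blob-labeling task can be sketched with a simple flood-fill baseline. This is a hedged illustration of what "blob detection" computes, not the proposed linked-list algorithm; note the memory cost (a full label image plus a stack) that the paper's design avoids.

```python
def label_blobs(grid):
    """Label 4-connected foreground blobs in a binary grid; return (count, labels)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                count += 1
                stack = [(r, c)]                  # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not labels[y][x]:
                        labels[y][x] = count
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return count, labels

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 1]]
n, _ = label_blobs(image)
print(n)   # 3 connected blobs
```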
In order to solve the problem of detecting a BOC signal that uses a long-period pseudo-random sequence, an algorithm is presented based on quadrature channel correlation. The quadrature channel correlation method eliminates the autocorrelation component of the carrier wave, allowing extraction of the absolute autocorrelation peaks of the BOC sequence. If the same lag difference and height difference exist for adjacent peaks, the BOC signal can be detected effectively using a statistical analysis of the multiple autocorrelation peaks. The simulation results show that the interference of the carrier wave component is eliminated and the autocorrelation peaks of the BOC sequence are obtained effectively without demodulation. The BOC signal can be detected effectively when the SNR is greater than −12 dB, and the detection ability can be improved further by increasing the number of sampling points. The higher the ratio of the square-wave subcarrier speed to the pseudo-random sequence speed, the greater the detection ability at a lower SNR. The algorithm presented in this paper is superior to the algorithm based on spectral correlation.
The interval temporal logic (ITL) model checking (MC) technique enhances the power of intrusion detection systems (IDSs) to detect concurrent attacks, due to the strong expressive power of ITL. However, an ITL formula suffers from difficulty in describing the time constraints between different actions in the same attack. To address this problem, we formalize a novel real-time interval temporal logic: real-time attack signature logic (RASL). Based on this new logic, we put forward a RASL model checking algorithm. Furthermore, we use RASL formulas to describe attack signatures and employ discrete timed automata to create an audit log. As a result, the RASL model checking algorithm can be used to automatically verify whether the automata satisfy the formulas, that is, whether the audit log coincides with the attack signatures. The simulation experiments show that the new approach effectively enhances the detection power of MC-based intrusion detection methods for a number of telnet attacks, p-trace attacks, and sixteen other types of attacks. These experiments also indicate that the new algorithm can find several types of real-time attacks, whereas the existing MC-based intrusion detection approaches cannot.
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering is first used to remove noise while preserving the edge details of the image. Second, the laser spot image is binarized to extract the target from the background. Morphological filtering is then performed to eliminate noise points inside and outside the spot. Finally, the edge of the pre-processed spot image is extracted and the laser spot center is obtained using a circle-fitting method. On the foundation of the circle-fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
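The final circle-fitting step can be sketched with the classic least-squares (Kåsa) fit, which linearizes the circle equation; this is a hedged stand-in for whatever fitting variant the paper uses, shown here on noise-free synthetic edge points.

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit.

    Solves the linearized circle equation
        x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    for center (cx, cy) and radius r.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(k + cx**2 + cy**2)
    return cx, cy, r

# Synthetic edge points on a circle centered at (2, 3) with radius 5
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
cx, cy, r = fit_circle(2 + 5 * np.cos(t), 3 + 5 * np.sin(t))
print(round(cx, 3), round(cy, 3), round(r, 3))   # 2.0 3.0 5.0
```

The preceding median and morphological filtering stages matter precisely because this least-squares fit is sensitive to outlier edge points.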
In recent years, massive MIMO systems have been widely researched to realize high-speed data transmission. Since massive MIMO systems use a large number of antennas, they require huge complexity to detect the signal. In this paper, we propose a novel detection method for massive MIMO using parallel detection with maximum likelihood detection with QR decomposition and the M-algorithm (QRM-MLD) to reduce complexity and latency. The proposed scheme obtains an R matrix after permutation of the H matrix and QR decomposition. The R matrix is then reduced using Gauss-Jordan elimination. By using the modified R matrix, the proposed method can detect the transmitted signal using parallel detection. Simulation results show that the proposed scheme achieves reduced complexity and latency with little degradation of the bit error rate (BER) performance compared with the conventional method.
The lane departure warning system (LDWS) has been regarded as an efficient method to lessen the damage of road traffic accidents resulting from driver fatigue or inattention, and lane detection is one of the key techniques for LDWS. To overcome the contradiction between algorithm complexity and the real-time requirement of a vehicle onboard system, this paper introduces a new lane detection method based on intelligent CCD parameter regulation. In order to improve the real-time capability of the system, a CCD parameter regulating method is proposed which enhances the contrast between the lane line and the road surface and reduces image noise, laying a good foundation for the subsequent lane detection. The Hough transform algorithm is improved by selection and classification of seed points, and finally the lane line is extracted through some restrictions. Experimental results verify the effectiveness of the proposed method, which improves not only the real-time capability but also the accuracy of the system.
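The Hough transform at the heart of such lane detectors votes each edge pixel into (theta, rho) line-parameter space and keeps the strongest bins. This is a minimal, hedged sketch of the standard accumulator (without the paper's seed-point selection improvement); the binning resolution is an assumption.

```python
import math
from collections import Counter

def hough_peak(points, theta_step_deg=1):
    """Vote edge pixels into (theta, rho) space; return the strongest line."""
    votes = Counter()
    for x, y in points:
        for t in range(0, 180, theta_step_deg):
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th), 1)  # 0.1-px rho bins
            votes[(t, rho)] += 1
    (theta_deg, rho), n = votes.most_common(1)[0]
    return theta_deg, rho, n

pts = [(i, i) for i in range(6)]     # edge pixels lying on the line y = x
theta, rho, n = hough_peak(pts)
print(theta, rho, n)   # 135 0.0 6: all six pixels vote for x*cos135 + y*sin135 = 0
```

In a real lane detector the image is first thresholded to edge pixels, and restrictions on theta (near-vertical lane angles) prune the search, which is part of how such systems stay real-time.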
Zhou, Xu; Zhao, Xiaohui; Liu, Yanheng; Sun, Geng
Community detection can be used as an important technique for product and personalized service recommendation. A game theory-based approach to detect overlapping community structure is introduced in this paper. The process of community formation is converted into a game; when no agent (node) can improve its own utility, the game process terminates. The utility function is composed of a gain and a loss function, and we present a new gain function in this paper. In addition, rather than choosing an action randomly among join, quit and switch for each agent to get a new label, two new strategies for each agent to update its label are designed during the game; the strategies are also evaluated and compared for each agent in order to find its best result. The overlapping community structure is naturally presented when the stop criterion is satisfied. The experimental results demonstrate that the proposed algorithm outperforms similar algorithms for detecting overlapping communities in networks.
Optical diagnostic techniques are commonly used to observe the breakup of dense sprays. In order to extract quantitative data from such images, edge detection algorithms have commonly been used. However, correlation image velocimetry techniques are now also becoming available for such applications. An empirical comparison between these two techniques is demonstrated for the high-speed velocimetry of the breakup of an annular air-assisted spray. A threshold based sub-pixel interpolating edge detection algorithm is employed. Both real and synthetic images are used to determine the sensitivity of the error in these techniques to changes in both image noise and defocus, the two leading causes of information loss. It is demonstrated that correlation image velocimetry techniques are generally superior in precision and accuracy as compared to edge detection techniques for the application of spray velocimetry within a reasonable parameter space of noise and defocus.
Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from both the network structure and the vertex attribute data. Our method takes a Bayesian approach, modeling the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
In order to achieve perfectly blind detection for a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. First, the original image is divided into non-overlapping image blocks, and decomposition coefficients are obtained by a lifting-based wavelet transform (LWT) in each image block. Second, the low-frequency coefficients of the block images are selected and approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally an adaptive quantization of the robust watermark is embedded in the low-frequency coefficients of the LWT. Experimental results show that the scheme is robust against common signal processing attacks, while perfectly blind detection is achieved.
Wireless Sensor Networks (WSNs) adopt GHz-band communication carriers, and it has been found that the GHz carrier attenuation model along the transmission path is associated with vegetation water content. In this paper, based on the RSSI mechanism of WSN nodes, we form vegetation dehydration sensors. Through the relationship between vegetation water content and carrier attenuation, we perceive variations in forest vegetation water content and the early fire gestation process, and establish algorithms for early forest fire detection. Experiments confirm that wireless sensor networks can accurately perceive vegetation dehydration events and forest fire events. Simulation results show that when the WSN dehydration perception channel (P2P) represents 75% or more of the carrier channel, the detection requirements can be met; this yields a new algorithm for early forest fire detection.
Salem, A. A.
V-bending is widely used to produce sheet metal components. There are global changes in the shape of the sheet metal component during progressive bending processes. Accordingly, collisions may occur between part and tool during bending. Collision-freedom is considered one of the feasibility conditions of V-bending process planning, in which tool selection is verified by the absence of collisions. This paper proposes an intelligent collision detection algorithm which has the ability to distinguish between 2D bent parts and other bent parts. Owing to this ability, 2D and 3D collision detection subroutines have been developed in the proposed algorithm. This division of the algorithm's subroutines reduces the computational operations during collision detection.
Qaiyum, M.; Tyrrell, P.N.M.; McCall, I.W.; Cassar-Pullicino, V.N.
Objective. Multilevel spinal injury is well recognised. Previous studies reviewing the radiographs of spinal injury patients have shown an incidence of 15.2% of unsuspected spinal injury. It is recognised that magnetic resonance imaging (MRI) can identify injuries that are not demonstrated on radiographs. The objective of this study was to determine the incidence and significance of spinal injuries using MRI in comparison with radiographs. Design and patients. The radiographs and MR images of 110 acute spinal injury patients were reviewed independently of each other and the findings were then correlated to determine any unsuspected injury. Results. MRI detected vertebral body bone bruises (microtrabecular bone injury) in 41.8% of spinal injury patients which were not seen on radiographs. These bone bruises were best appreciated on sagittal short tau inversion recovery MR sequences and seen at contiguous and non-contiguous levels in relation to the primary injury. Conclusion. This level of incidence of bone bruises has not previously been appreciated. We recommend that patients undergoing MRI for an injured segment of the spine are better assessed by MRI of the entire spine at the same time to exclude further injury. (orig.)
L. S. Sindhuja
Security of Mobile Wireless Sensor Networks is a vital challenge, as the sensor nodes are deployed in unattended environments and are prone to various attacks. One of these is the node replication attack, in which physically insecure nodes are acquired by the adversary and cloned with the same identity as the captured node, and the adversary deploys an unpredictable number of replicas throughout the network. Hence replica node detection is an important challenge in Mobile Wireless Sensor Networks. Various replica node detection techniques have been proposed to detect these replica nodes. These methods incur control overheads, and the detection accuracy is low when a replica is selected as a witness node. This paper proposes to solve these issues by enhancing the Single Hop Detection (SHD) method using the Clonal Selection algorithm to detect clones by selecting appropriate witness nodes. The advantages of the proposed method include (i) an increase in the detection ratio, (ii) a decrease in the control overhead, and (iii) an increase in throughput. The performance of the proposed work is measured using the detection ratio, false detection ratio, packet delivery ratio, average delay, control overheads, and throughput. The implementation is done using ns-2 to demonstrate the practicality of the proposed work.
Synthetic Aperture Radar (SAR) tomography can reconstruct the elevation profile of each pixel based on a set of co-registered complex images of a scene. Its main advantage over classical interferometric methods is the capability to improve the detection of single persistent scatterers as well as to enable the detection of multiple scatterers interfering within the same pixel. In this paper, three tomographic algorithms are compared and applied to a dataset of 32 images to generate the elevation map of dominant scatterers in a scene. Targets which present stable properties over time, Persistent Scatterers (PS), are then detected based on reflectivity functions reconstructed with Capon filtering.
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution that removes multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the adaptive adjustment of thresholds with gait frequency is thereby realized, improving ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the results show that, compared with the traditional fixed-threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rates of the ZVI, indicating that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that using the ZVIs detected by the adaptive algorithm in pedestrian trajectory calculation achieves better performance.
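For context, a fixed-threshold ZVI detector of the kind the paper improves on can be sketched as follows (the adaptive SPWVD-RMFI threshold is not reproduced; `thresh`, `win`, and the synthetic gait signal are illustrative assumptions):

```python
import numpy as np

def detect_zvi(acc_mag, g=9.81, thresh=0.3, win=5):
    """Flag zero-velocity samples where the local deviation of the
    accelerometer magnitude from gravity stays below a fixed threshold.

    This is the fixed-threshold baseline; the adaptive, gait-frequency-
    dependent threshold of the paper is not reproduced here.
    """
    dev = np.abs(acc_mag - g)
    pad = win // 2
    padded = np.pad(dev, pad, mode="edge")
    # moving maximum of the deviation over a short window
    local = np.array([padded[i:i + win].max() for i in range(len(dev))])
    return local < thresh

# Synthetic gait: stance phases near g, one swing phase oscillating around g.
t = np.arange(200)
acc = np.full(200, 9.81)
acc[50:100] += 2.0 * np.sin(0.5 * t[50:100])   # swing: large deviation
zvi = detect_zvi(acc)
```
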
Amudha, P; Karthik, S; Sivakumari, S
Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) for the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different.
Abnormal cardiac beat identification is a key process in the detection of heart ailments. This work proposes a technique for the detection of Bundle Branch Block (BBB) using a hybrid Firefly and Particle Swarm Optimization (FFPSO) technique in combination with a Levenberg–Marquardt Neural Network (LMNN) classifier. BBB develops when there is a block along the path the electrical impulses travel to make the heart beat. ECG feature extraction is a key process in detecting heart ailments. Our present study proposes a hybrid method combining two meta-heuristic optimization methods, the Firefly algorithm (FFA) and Particle Swarm Optimization (PSO), for feature optimization of ECG (BBB and normal) patterns. One of the major controlling forces of the FFA is the light intensity attraction that models the optimum solution. Because the light intensity attraction process of the FFA depends on random search directions, it may delay reaching the globally optimal solution. The hybrid FFPSO technique integrates concepts from the FFA and PSO and creates new individuals. In the FFPSO method, the local search is performed through the modified light intensity attraction step with the PSO operator. The FFPSO features are compared with classical FFA and PSO features, and the FFPSO feature values are given as input to the LMNN classifier. It has been observed that the performance of the classifier is improved with the help of the optimized features.
Hortos, William S.
In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts
Jakobsen, Michael Linde; Pedersen, Finn; Hanson, Steen Grüner
This paper presents a zero-crossing detection algorithm for arrays of compact low-cost optical sensors based on spatial filtering for measuring fluctuations in angular velocity of rotating solid structures. The algorithm is applicable for signals with moderate signal-to-noise ratios, and delivers... ...factor is directly related to the thermal expansion and refractive-index coefficients of the optics (> 10^(-5) K^(-1) for glass). By cascade-coupling an array of sensors, the ensemble-averaged angular velocity is measured in "real-time". This will reduce the influence of pseudo-vibrations arising from...
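The basic zero-crossing frequency estimate can be sketched as follows (a minimal illustration only; the spatial-filtering optics and cascade-coupled sensor averaging of the paper are not modeled):

```python
import numpy as np

def zero_crossing_frequency(signal, fs):
    """Estimate the dominant frequency of a roughly sinusoidal signal by
    counting sign changes: each full period contributes two crossings."""
    signs = np.sign(signal)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    duration = (len(signal) - 1) / fs
    return crossings / (2.0 * duration)

fs = 1000.0
t = np.arange(1000) / fs
# 50 Hz tone; the small phase offset avoids samples landing exactly on zero.
sig = np.sin(2 * np.pi * 50 * t + 0.1)
f_est = zero_crossing_frequency(sig, fs)
```
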
Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo
During the last years, the increasing occurrence of pollution and the alarming deterioration of the environmental health conditions of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched in 2007 the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service. EDISOFT, which has been a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, such as wind information, together with image and geometry analysis techniques. The synergy between these different objectives (R&D versus operational) has allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST
Merk, D.; Zinner, T.
In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolut...
Benmoussat, M. S.; Guillaume, M.; Caulier, Y.; Spinnler, K.
A fully automatic approach based on induction thermography and detection algorithms is proposed to inspect industrial metallic parts containing different surface and sub-surface anomalies such as open cracks and open and closed notches of different sizes and depths. A practical experimental setup is developed, in which lock-in and pulsed thermography (LT and PT, respectively) techniques are used to establish a dataset of thermal images for three different mockups. Data cubes are constructed by stacking up the temporal sequence of thermogram images. After reduction of the data space dimension by means of denoising and dimensionality reduction methods, anomaly detection algorithms are applied to the reduced data cubes. The dimensions of the reduced data spaces are calculated automatically according to a chosen criterion. The results show that, when reduced data cubes are used, anomaly detection algorithms originally developed for hyperspectral data, the well-known Reed and Xiaoli Yu detector (RX) and the regularized adaptive RX (RARX), give good detection performance for both surface and sub-surface defects in a non-supervised way.
Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that certain hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm which detects a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving precious information about the wheel's health. This configuration determines an acquisition time for each blade which becomes shorter as the machine rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis results in poor frequency resolution and is not suitable for small frequency shift detection. Non-Harmonic Fourier Analysis instead showed high reliability in vibration frequency estimation even with data samples collected in a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
O'Donnell, Erin M.; Messinger, David W.; Salvaggio, Carl; Schott, John R.
The ability to detect and identify effluent gases is, and will continue to be, of great importance. This would not only aid in the regulation of pollutants but also in treaty enforcement and monitoring the production of weapons. Considering these applications, finding a way to remotely investigate a gaseous emission is highly desirable. This research utilizes hyperspectral imagery in the infrared region of the electromagnetic spectrum to evaluate an invariant method of detecting and identifying gases within a scene. The image is evaluated on a pixel-by-pixel basis and is studied at the subpixel level. A library of target gas spectra is generated using a simple slab radiance model. This results in a more robust description of gas spectra which are representative of real-world observations. This library is the subspace utilized by the detection and identification algorithms. The subspace will be evaluated for the set of basis vectors that best span the subspace. The Lee algorithm will be used to determine the set of basis vectors, which implements the Maximum Distance Method (MaxD). A Generalized Likelihood Ratio Test (GLRT) determines whether or not the pixel contains the target. The target can be either a single species or a combination of gases. Synthetically generated scenes will be used for this research. This work evaluates whether the Lee invariant algorithm will be effective in the gas detection and identification problem.
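The matched-subspace GLRT statistic at the core of such detectors can be sketched as follows (an illustrative sketch: the slab-radiance gas library and MaxD basis selection are replaced by a hand-picked subspace):

```python
import numpy as np

def glrt_statistic(y, H):
    """Fraction of the energy of y explained by the target subspace
    spanned by the columns of H -- a simple matched-subspace GLRT
    statistic; values near 1 indicate target presence."""
    P = H @ np.linalg.inv(H.T @ H) @ H.T   # orthogonal projector onto col(H)
    return float(y @ P @ y) / float(y @ y)

# Two-dimensional target subspace inside R^5.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])
y_target = np.array([1.0, 2.0, 0.0, 0.0, 0.0])      # lies in the subspace
y_background = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # orthogonal to it
s1 = glrt_statistic(y_target, H)
s0 = glrt_statistic(y_background, H)
```
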
Shellcodes are machine language codes injected into target programs in the form of network packets or malformed files. Shellcodes can trigger buffer overflow vulnerabilities and execute malicious instructions. Signature matching technology used by antivirus software or intrusion detection systems has a low detection rate for unknown or polymorphic shellcodes; to solve this problem, an immune-inspired shellcode detection algorithm named ISDA was proposed. Both static and dynamic analysis were applied: the shellcodes were disassembled into assembly instructions during static analysis, and for dynamic analysis the API function sequences of shellcodes were obtained by simulated execution to capture the behavioral features of polymorphic shellcodes. The extracted features of shellcodes were encoded into antigens based on an n-gram model. Immature detectors become mature after immune tolerance based on the negative selection algorithm. To improve the nonself space coverage rate, the immune detectors were encoded as hyperellipsoids. To generate better antibody offspring, the detectors were optimized through a clonal selection algorithm with genetic mutation. Finally, shellcode samples were collected and tested, and the results show that the proposed method has higher detection accuracy for both non-encoded and polymorphic shellcodes.
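The n-gram antigen encoding and negative-selection step can be illustrated with a toy sketch (the hyperellipsoid detectors and clonal-selection optimization are not reproduced; the token names below are hypothetical API calls, not taken from the paper):

```python
def ngrams(tokens, n=2):
    """Encode a token sequence (e.g. disassembled opcodes or API names)
    as the set of its n-grams."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_detectors(candidate_ngrams, self_set):
    """Negative selection: keep only candidate detectors that match no
    'self' n-gram, so surviving detectors cover nonself space."""
    return {g for g in candidate_ngrams if g not in self_set}

def is_nonself(trace, detectors):
    """A trace is flagged nonself if any of its n-grams hits a detector."""
    return bool(ngrams(trace) & detectors)

# 'Self' behaviour: a benign API call sequence (hypothetical names).
benign = ["open", "read", "close", "open", "write", "close"]
self_set = ngrams(benign)

# Candidate detectors drawn from an observed shellcode-like trace.
shellcode = ["open", "read", "exec", "connect", "write"]
detectors = build_detectors(ngrams(shellcode), self_set)
```
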
Because unmanned aerial vehicle remote sensing images (UAVRSI) contain rich texture details of ground objects and markedly exhibit the phenomenon of the same objects having different spectra, it is difficult to effectively acquire edge information using traditional edge detection operators. To solve this problem, an edge detection method for UAVRSI combining Zernike moments with clustering algorithms is proposed in this study. To begin with, two typical clustering algorithms, fuzzy c-means (FCM) and K-means, are used to cluster the original remote sensing images so as to form homogeneous regions of ground objects. Then, Zernike moments are applied to carry out edge detection on the clustered remote sensing images. Finally, visual comparison and sensitivity methods are adopted to evaluate the accuracy of the detected edge information. Two groups of experimental data are selected to verify the proposed method. Results show that the proposed method effectively improves the accuracy of edge information extracted from remote sensing images.
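The cluster-then-detect idea can be illustrated with a minimal sketch (a tiny 1-D k-means stands in for the FCM/K-means step, and label changes between neighbours stand in for the Zernike-moment edge localization, which is not reproduced):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on scalar intensities: assign each value to its
    nearest centre, then recompute centres as cluster means."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Synthetic 8x8 "image": dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
labels = kmeans_1d(img.ravel()).reshape(img.shape)

# Edges where the cluster label changes between horizontal neighbours.
edges = labels[:, 1:] != labels[:, :-1]
```
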
For the traditional orthogonal matching pursuit (OMP) algorithm in multi-user detection (MUD) for uplink grant-free NOMA, the BER performance is poor, so in this paper we propose a temporal-correlation orthogonal matching pursuit algorithm (TOMP) to realize multi-user detection. The core idea of TOMP is to use the temporal correlation of the active user sets to achieve user activity and data detection over a number of continuous time slots. We use the estimated active user set in the current time slot as a priori information to estimate the active user set for the next slot, maintaining an active user set Tˆl of size K (K is the number of users) that is modified in each iteration. Specifically, a user believed to be active in one iteration but shown to be in error in another can be added to the set Tˆl or removed from it. Theoretical analysis of the improved algorithm provides a guarantee that the multiple users can be successfully detected with high probability. The simulation results show that the proposed scheme can achieve better bit error rate (BER) performance in the uplink grant-free NOMA system.
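A generic OMP recovery step, the baseline that TOMP extends, can be sketched as follows (the temporal-correlation prior across slots is not shown; the orthonormal dictionary below is an illustrative choice that makes recovery exact):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, sorted(support)

# Orthonormal dictionary: recovery of a 3-sparse vector is exact.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x_true = np.zeros(16)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = A @ x_true
x_hat, support = omp(A, y, 3)
```
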
Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.; Titkov, V. V.
An algorithm of fatigue crack detection in optical images taken in fatigue tests of materials is proposed and tested. The algorithm is designed for automation of measurements of the crack propagation parameter and tracing the crack tip position in the course of cyclic loading for the purpose of shifting the optical system with respect to the examined sample surface to the "region of interest." It is found that the coordinates of the image fragment containing the crack can be determined with a mean error of 1.93% of the total size of the raster. Testing of the algorithm on model images shows that the mean error of determining the crack tip position is smaller than 56 pixels.
A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented, and the resource scheduling problem is decomposed into subtasks; the sensor resource scheduling problem then becomes a matching problem between sensors and subtasks. Secondly, a resource matching optimization model based on the sensor resources and tasks is established, which considers several factors such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the matching optimization model effectively, based on improved updating of particle velocity and position through a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm performs better in terms of convergence speed, search capability, solution accuracy, and efficiency.
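A plain binary PSO update of the kind MBPSO modifies can be sketched as follows (the doubt factor and modified sigmoid are not reproduced; OneMax, maximizing the bit sum, serves as a stand-in objective, and all coefficients are illustrative):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(fitness, dim, n_particles=10, iters=30, seed=1):
    """Minimal binary PSO: velocities are updated from personal/global
    bests, then each bit is resampled through a sigmoid of the velocity."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, dim)).astype(float)
    v = np.zeros((n_particles, dim))
    pbest, pfit = x.copy(), np.array([fitness(p) for p in x])
    g = int(np.argmax(pfit))
    gbest, gfit = pbest[g].copy(), pfit[g]
    first_gfit = gfit
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        # binary position update: each bit is 1 with probability sigmoid(v)
        x = (rng.random((n_particles, dim)) < sigmoid(v)).astype(float)
        fit = np.array([fitness(p) for p in x])
        improved = fit > pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        g = int(np.argmax(pfit))
        if pfit[g] > gfit:
            gbest, gfit = pbest[g].copy(), pfit[g]
    return gbest, gfit, first_gfit

gbest, gfit, first_gfit = binary_pso(lambda b: b.sum(), dim=12)
```
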
In this paper we describe a novel approach for the extraction of object features from a digital image captured in an industrial environment. The developed algorithm localizes the projected position of the symmetry axis of cylindrical objects. Conventional approaches to this task are often based on, e.g., edge detection or image matching to determine object features. These methods tend to be time consuming and are not suited for online applications. The proposed algorithm utilizes sufficient statistics to reduce the amount of data which needs to be evaluated. In addition, it determines the symmetry axis not from local but from global image features. This strategy significantly reduces the computational load while providing the desired information. Finally, we provide an application example and discuss some properties of the presented algorithm.
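One concrete realization of "global features from sufficient statistics" is the image-moment estimate of an object's centroid and axis orientation; since the paper's exact statistic is not specified here, this is an assumption-laden sketch:

```python
import numpy as np

def symmetry_axis(img):
    """Centroid and axis orientation of a bright object from global
    image moments: the axis angle follows from the second central
    moments via theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = img.sum()
    cx, cy = (xs * img).sum() / m, (ys * img).sum() / m
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    mu11 = ((xs - cx) * (ys - cy) * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

# A horizontal bar: axis angle should be 0, centroid at its middle.
img = np.zeros((21, 21))
img[9:12, 3:18] = 1.0
(cx, cy), theta = symmetry_axis(img)
```
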
We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation.
Aguiar, Derek; Halldórsson, Bjarni V.; Morrow, Eric M.; Istrail, Sorin
Motivation: The understanding of the genetic determinants of complex disease is undergoing a paradigm shift. Genetic heterogeneity of rare mutations with deleterious effects is more commonly being viewed as a major component of disease. Autism is an excellent example where research is active in identifying matches between the phenotypic and genomic heterogeneities. A considerable portion of autism appears to be correlated with copy number variation, which is not directly probed by single nucleotide polymorphism (SNP) array or sequencing technologies. Identifying the genetic heterogeneity of small deletions remains a major unresolved computational problem partly due to the inability of algorithms to detect them. Results: In this article, we present an algorithmic framework, which we term DELISHUS, that implements three exact algorithms for inferring regions of hemizygosity containing genomic deletions of all sizes and frequencies in SNP genotype data. We implement an efficient backtracking algorithm—that processes a 1 billion entry genome-wide association study SNP matrix in a few minutes—to compute all inherited deletions in a dataset. We further extend our model to give an efficient algorithm for detecting de novo deletions. Finally, given a set of called deletions, we also give a polynomial time algorithm for computing the critical regions of recurrent deletions. DELISHUS achieves significantly lower false-positive rates and higher power than previously published algorithms partly because it considers all individuals in the sample simultaneously. DELISHUS may be applied to SNP array or sequencing data to identify the deletion spectrum for family-based association studies. Availability: DELISHUS is available at http://www.brown.edu/Research/Istrail_Lab/. Contact: Eric_Morrow@brown.edu and Sorin_Istrail@brown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22689755
Background: T-wave alternans (TWA) provides a noninvasive and clinically useful marker for the risk of sudden cardiac death (SCD). The most widely used TWA detection algorithms work in two different domains: time and frequency. The disadvantage of the spectral analytical techniques is that they treat the alternans signal as a stationary wave with constant amplitude and phase, and cannot detect non-stationary characteristics of the signal. The temporal domain methods are sensitive to the alignment of the T-waves. In this study, we sought to develop a robust combined algorithm (CA) to assess T-wave alternans, which can qualitatively detect and quantitatively measure TWA in the time domain. Methods: The T-wave sequences were extracted and the total energy of each T-wave within the specified time-frequency region was calculated. The rank-sum test was applied to the ranked energy sequences of the T-waves to detect TWA qualitatively. The ECG containing TWA was quantitatively analyzed with a correlation method. Results: A simulation test showed a mean sensitivity of 91.2% in detecting TWA, and for an SNR of at least 30 dB the detection accuracy reached 100%. The clinical data experiment showed that the results from this method had a correlation coefficient of 0.96 with the spectral method. Conclusions: A novel TWA analysis algorithm utilizing the wavelet transform and correlation technique is presented in this paper. TWAs are not only correctly detected qualitatively via the energy values of the T-waves, but the alternans frequency and amplitude are also measured quantitatively in the temporal domain.
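The qualitative detection step, a rank-sum test on beat-wise T-wave energies, can be sketched as follows (the wavelet time-frequency energy computation is not reproduced; the energy values and the 2.5 z-threshold are illustrative assumptions):

```python
import math

def ranksum_z(a, b):
    """Wilcoxon rank-sum z-score between two samples (normal
    approximation, no tie correction)."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    ranks_a = sum(r + 1 for r, (_, grp) in enumerate(combined) if grp == 0)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (ranks_a - mean) / sd

def twa_detected(energies, z_crit=2.5):
    """Split beat-wise T-wave energies into even/odd beats and flag
    alternans when their rank distributions differ significantly."""
    even, odd = energies[::2], energies[1::2]
    return abs(ranksum_z(even, odd)) > z_crit

# Alternating beat energies (ABAB...) versus a steady sequence.
alternans = [1.0, 2.0, 1.1, 2.1, 0.9, 1.9, 1.05, 2.05, 0.95, 1.95]
steady = [1.00, 0.98, 1.02, 1.01, 0.97, 0.99, 1.03, 0.96, 1.04, 1.005]
```
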
Treiber, O [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Wanninger, F [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Fuehr, H [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Panzer, W [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Regulla, D [Institute of Radiation Protection, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); Winkler, G [Institute of Biomathematics and Biometry, GSF - National Research Center for Environment and Health, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany)
This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve this problem, numerous computerized methods have been developed to automate sleep spindle identification. Still, the performance of these automated detection methods varies inconsistently from study to study, for two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context, and we hypothesize that using the resultant Pareto fronts to derive evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms: three Fourier-based, one continuous wavelet transform (CWT)-based, and two Hilbert-Huang transform (HHT)-based. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737.
Gaur, Pallavi; Chaturvedi, Anoop
Clustering patterns and motifs give immense information about any biological dataset. This paper describes an application of machine learning algorithms for clustering and candidate-motif detection in miRNAs derived from exosomes. Recent progress in exosome research, particularly regarding exosomal miRNAs, has given rise to much bioinformatics-based work. Information on the clustering pattern and candidate motifs in miRNAs of exosomal origin helps in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining the clustering pattern and candidate motifs in exosomal miRNAs, this work also elaborates the usefulness of machine learning algorithms that can be efficiently executed on various programming languages and platforms. The data were clustered and sequence candidate motifs were detected successfully, and the results were compared and validated with available web tools such as BLASTN and the MEME suite. With this information, deeper insight can be gained for the analysis of newly discovered miRNAs in exosomes, which are considered circulating biomarkers. In addition, executing the machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. The approach can be applied to other biological data-mining tasks as well.
Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun
Synthetic aperture radar (SAR) is an indispensable tool for marine monitoring. As SAR sensors improve, high-resolution images can be acquired that contain more target structure information, such as finer spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method operates on the APT-domain values. Firstly, the image is mapped into the new transform domain. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed with the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image, as verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform enhances target structure and improves image contrast, and that the algorithm performs well in ship and ship-wake detection.
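The APT domain itself is not specified in the abstract, but the CFAR screening step rests on a standard idea: compare each cell against an estimate of the local clutter level. A minimal one-dimensional cell-averaging CFAR sketch, with illustrative guard/training window sizes and threshold scale:

```python
def ca_cfar(signal, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cells whose intensity exceeds `scale`
    times the mean of the surrounding training cells.  Guard cells next
    to the cell under test are excluded so the target does not leak into
    the clutter estimate; edge cells without a full window are skipped."""
    hits = []
    half = guard + train
    for i in range(half, len(signal) - half):
        left = signal[i - half:i - guard]          # training cells, left
        right = signal[i + guard + 1:i + half + 1]  # training cells, right
        noise = sum(left + right) / (len(left) + len(right))
        if signal[i] > scale * noise:
            hits.append(i)
    return hits

# Flat sea-clutter background with two bright "ship" cells.
profile = [1.0] * 60
profile[20] = 12.0
profile[41] = 15.0
hits = ca_cfar(profile)
```

On this profile only the two bright cells exceed the adaptive threshold, illustrating why CFAR keeps the false alarm rate constant as the background level varies.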
Background: Our group earlier developed a small monitoring device that uses accelerometer measurements to accurately detect motor fluctuations in patients with Parkinson's disease (On and Off states) based on an algorithm that characterizes gait through the frequency content of strides. To further validate the algorithm, we studied the correlation of its outputs with the motor section of the Unified Parkinson's Disease Rating Scale part III (UPDRS-III). Method: Seventy-five patients suffering from Parkinson's disease were asked to walk in both the Off and the On state while wearing the inertial sensor on the waist. Additionally, all patients were administered the motor section of the UPDRS in both motor phases. Tests were conducted at the patient's home. Convergence between the algorithm and the scale was evaluated using Spearman's correlation coefficient. Results: Correlation with the UPDRS-III was moderate (rho −0.56; p < 0.001). Correlation between the algorithm outputs and the gait item in the UPDRS-III was good (rho −0.73; p < 0.001). Factorial analysis of the UPDRS-III has repeatedly shown that several of its items can be clustered under the so-called Factor 1: "axial function, balance, and gait." The correlation between the algorithm outputs and this factor of the UPDRS-III was −0.67 (p < 0.01). Conclusion: The correlation achieved by the algorithm with the UPDRS-III suggests that it might be a useful tool for monitoring patients with Parkinson's disease and motor fluctuations.
Excerpt from an acronym list and test-hardware description: ... extrema; CPA (closest point of approach); CPU (central processing unit); DDR (double data rate); DoG (difference of Gaussian); DPM (deformable parts model); FAST ...; PCI (peripheral component interconnect); PD (probability of detection); RADAR (radio detection and ranging); RAM (random access memory) ... The test machine had 16 GB of 1,600 MHz DDR3 RAM and a 64-bit operating system located, together with the swap space, on a SATA III SSD.
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in the VIoT, change detection (ChD) is a critical component: ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms for video surveillance in the VIoT, using quantitative measures and the salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB on diverse datasets. For performance evaluation, the quantitative measures used include the Overall Success Rate (OSR), Yule's Coefficient (YC), Jaccard's Coefficient (JC), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium-to-fast moving objects, although its performance degraded for small objects producing minor changes. The Otsu algorithm gave better results for indoor environments with slow-to-medium changes and nomadic object mobility. k-means performed well in indoor environments with small objects producing slow change, no shadowing and scarce illumination changes.
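Of the thresholding methods compared, Otsu's is the most compact to state: choose the grey level that maximizes the between-class variance of the histogram. A self-contained sketch applied to a bimodal difference-image histogram (the pixel values are illustrative):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the threshold t that maximizes the
    between-class variance w0*w1*(m0-m1)^2 of the grey-level histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0          # pixels at or below t
    sum0 = 0.0      # grey-level mass at or below t
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal difference-image histogram: background near 10, change near 200.
pixels = [10] * 500 + [12] * 300 + [200] * 100 + [205] * 50
t = otsu_threshold(pixels)
changed = sum(1 for p in pixels if p > t)
```

The threshold lands between the two modes, so exactly the 150 "changed" pixels exceed it.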
Corley, Meredith; Solem, Amanda; Qu, Kun; Chang, Howard Y; Laederach, Alain
Ribonucleic acid (RNA) secondary structure prediction continues to be a significant challenge, in particular when attempting to model sequences with less rigidly defined structures, such as messenger and non-coding RNAs. Crucial to interpreting RNA structures as they pertain to individual phenotypes is the ability to detect RNAs with large structural disparities caused by a single nucleotide variant (SNV) or riboSNitches. A recently published human genome-wide parallel analysis of RNA structure (PARS) study identified a large number of riboSNitches as well as non-riboSNitches, providing an unprecedented set of RNA sequences against which to benchmark structure prediction algorithms. Here we evaluate 11 different RNA folding algorithms' riboSNitch prediction performance on these data. We find that recent algorithms designed specifically to predict the effects of SNVs on RNA structure, in particular remuRNA, RNAsnp and SNPfold, perform best on the most rigorously validated subsets of the benchmark data. In addition, our benchmark indicates that general structure prediction algorithms (e.g. RNAfold and RNAstructure) have overall better performance if base pairing probabilities are considered rather than minimum free energy calculations. Although overall aggregate algorithmic performance on the full set of riboSNitches is relatively low, significant improvement is possible if the highest confidence predictions are evaluated independently. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
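Of the two objectives above, modularity has a closed form that can be computed directly from the adjacency matrix. A minimal sketch of Newman's definition on a toy graph (the decomposition-based MOEA machinery is omitted; graph and partitions are illustrative):

```python
def modularity(adj, communities):
    """Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)]
    over node pairs (i, j) in the same community.  `adj` is a symmetric
    0/1 adjacency matrix; `communities` maps node index -> label."""
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)            # 2m = total degree
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge 2-3.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
q_split = modularity(adj, [0, 0, 0, 1, 1, 1])   # the natural partition
q_whole = modularity(adj, [0, 0, 0, 0, 0, 0])   # everything in one community
```

The natural two-community split scores Q = 5/14 ≈ 0.357, while the trivial single-community partition scores 0, which is why maximizing Q recovers the community structure.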
Konstantinos N. Topouzelis
This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern because they threaten fragile marine and coastal ecosystems. The amount of pollutant discharged and its associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images improve the possibilities for detecting oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images in particular have been widely used for oil spill detection. The paper gives an overview of the methodologies used to detect oil spills in radar images, concentrating on manual and automatic approaches to distinguishing oil spills from other natural phenomena. We discuss the most common techniques for detecting dark formations in SAR images, the features extracted from the detected dark formations, and the most frequently used classifiers. We conclude with suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.
Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
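The swinging door algorithm underlying this work compresses a time series into linear segments whose reconstruction error stays within a tolerance; ramps are then significant segments by slope. A minimal sketch of the classic SDA segmentation (tolerance, data and the ramp slope threshold are illustrative; the paper's dynamic-programming merge step is omitted):

```python
def swinging_door(times, values, epsilon):
    """Swinging door compression: keep segment boundary points such that
    every sample lies within +/- epsilon of the piecewise-linear fit.
    The upper and lower "door" slopes tighten as points arrive; when they
    cross, the segment closes at the previous point."""
    kept = [0]
    anchor = 0
    slope_up, slope_dn = float("inf"), float("-inf")
    for i in range(1, len(values)):
        dt = times[i] - times[anchor]
        slope_up = min(slope_up, (values[i] + epsilon - values[anchor]) / dt)
        slope_dn = max(slope_dn, (values[i] - epsilon - values[anchor]) / dt)
        if slope_dn > slope_up:
            # Doors crossed: previous point ends this segment, starts the next.
            anchor = i - 1
            kept.append(anchor)
            dt = times[i] - times[anchor]
            slope_up = (values[i] + epsilon - values[anchor]) / dt
            slope_dn = (values[i] - epsilon - values[anchor]) / dt
    kept.append(len(values) - 1)
    return kept

# Ramp-up, plateau, ramp-down wind power profile.
times = list(range(9))
power = [0, 10, 20, 30, 30, 30, 20, 10, 0]
kept = swinging_door(times, power, epsilon=1.0)
ramps = [(a, b) for a, b in zip(kept, kept[1:])
         if abs(power[b] - power[a]) / (times[b] - times[a]) >= 5.0]
```

The profile compresses to boundary indices [0, 3, 5, 8]; applying a slope threshold to the resulting segments flags the up-ramp and the down-ramp while skipping the plateau.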
Xu, Lili; Luo, Shuqian
Microaneurysms (MAs) are the first manifestation of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, which extracts patterns possibly corresponding to MAs using a mathematical-morphology black top-hat transform; feature extraction, which characterizes these candidates; and classification, based on a support vector machine (SVM), which validates the MAs. The choice of feature vector and SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that a quadratic-polynomial SVM with a combination of features as input shows the best discriminating performance.
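The candidate-detection stage rests on the black top-hat transform: morphological closing minus the original image, which responds to dark structures smaller than the structuring element. A one-dimensional sketch on a toy intensity profile (real MA detection is two-dimensional; the profile and sizes here are illustrative):

```python
def dilate(signal, k):
    """Grey-level dilation: max filter with a flat structuring element of length k."""
    r, n = k // 2, len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def erode(signal, k):
    """Grey-level erosion: min filter with a flat structuring element of length k."""
    r, n = k // 2, len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def black_top_hat(signal, k):
    """Closing (dilation then erosion) minus the original: responds only
    to dark structures narrower than the structuring element."""
    closing = erode(dilate(signal, k), k)
    return [c - s for c, s in zip(closing, signal)]

# Bright retinal background with a narrow dark dip (an MA candidate).
profile = [100] * 20
for i, v in zip(range(9, 12), (70, 60, 70)):
    profile[i] = v
response = black_top_hat(profile, k=7)
candidates = [i for i, r in enumerate(response) if r > 20]
```

The response is zero on the flat background and peaks exactly over the dip, which is what makes thresholding it a natural candidate detector.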
Sharif-Khodaei, Z; Aliabadi, M H
Piezoelectric sensors are increasingly being used in active structural health monitoring due to their durability, light weight and low power consumption. In the present work, damage detection and characterization methodologies based on Lamb waves have been evaluated for aircraft panels. The applicability of various proposed delay-and-sum algorithms to isotropic and composite stiffened panels has been investigated, both numerically and experimentally. A numerical model for ultrasonic wave propagation in composite laminates is proposed and compared to signals recorded in experiments. A modified delay-and-sum algorithm is then proposed for detecting impact damage in composite plates with and without a stiffener, which is shown to capture and localize damage with only four transducers.
Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin
Powerful image editing tools are very common and easy to use these days. This situation enables forgeries made by adding or removing information from digital images. In order to detect forgeries of this type, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is then processed by the Fourier transform and represented by circular regions, and four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected by comparing the resulting metrics. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circular block architecture.
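The core copy-move detection pipeline — fixed-size blocks, a compact per-block feature vector, lexicographic sorting, neighbour comparison — can be sketched as follows. Quadrant means stand in for the paper's DWT/Fourier features, and the block size and distance filter are illustrative:

```python
def find_duplicate_blocks(image, block=4):
    """Copy-move detection sketch: slide fixed-size blocks over the image,
    summarize each block by a feature tuple (quadrant means here), sort
    the signatures lexicographically, and report identically-signed pairs
    that are spatially separated (overlapping blocks trivially match)."""
    h, w = len(image), len(image[0])
    half = block // 2
    sigs = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            feats = []
            for dy in (0, half):
                for dx in (0, half):
                    s = sum(image[y + dy + i][x + dx + j]
                            for i in range(half) for j in range(half))
                    feats.append(s // (half * half))
            sigs.append((tuple(feats), (y, x)))
    sigs.sort()  # equal signatures become adjacent
    pairs = []
    for (s1, p1), (s2, p2) in zip(sigs, sigs[1:]):
        if s1 == s2 and abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) >= block:
            pairs.append((p1, p2))
    return pairs

# 8x8 image of distinct values with the top-left 4x4 region pasted at (4, 4).
image = [[(y * 8 + x) * 3 % 251 for x in range(8)] for y in range(8)]
for dy in range(4):
    for dx in range(4):
        image[4 + dy][4 + dx] = image[dy][dx]
pairs = find_duplicate_blocks(image)
```

Sorting makes duplicate signatures adjacent, so the comparison is linear in the number of blocks rather than quadratic.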
Libonati, Renata; DaCamara, Carlos; Setzer, Alberto; Morelli, Fabiano; Melchiori, Arturo
The Brazilian Cerrado is significantly affected by anthropic fires every year, which makes the region an important source of pyrogenic emissions. This study aims at generating improved 1 km monthly burned area maps for Cerrado based on remote-sensed information. The algorithm relies on a burn-sensitive vegetation index based on MODIS daily values of near and middle infrared reflectance and makes use of active fire detection from multiple sensors. Validation is performed using reference burned...
In the present study, we applied the Particle Swarm Optimization (PSO) procedure to parametrize two Local Maxima (LM) algorithms and a pattern recognition model, based on raster and point-cloud datasets, in order to extract the treetops of coniferous forests from high-resolution LiDAR data for different forest structures (mono-layered, bi-layered and multi-layered) in the Alps region. The approach based on the pattern recognition model applies the geomorphon algorithm to the DSM to detect the treetops. The geomorphon model gave good results in terms of matching rate (Rmat: 0.8) with intermediate commission and omission rates (Rcom: 0.22, Rom: 0.2). It could therefore be a valid alternative to the LM algorithms when only raster products (DSM, CHM) are available. The geomorphon pattern recognition model proved to be a powerful method for properly detecting the treetops of coniferous stands with complex forest structures, yielding high detection rates and accurate forest-volume estimates, also in comparison with the most recent literature. The models are developed in Java under a free and open-source license and are integrated in the JGrassTools library, which is now available as the SpatialToolbox of the GIS gvSIG.
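The baseline LM family of detectors reduces to a simple idea: a treetop is a cell that dominates its neighbourhood in the canopy height model. A crude sketch on a toy CHM (this is the local-maxima principle only, not the geomorphon model, which classifies the local surface shape instead; window radius and height cutoff are illustrative):

```python
def local_maxima(chm, radius=1, min_height=2.0):
    """Treetop candidates: cells that are the strict maximum of their
    (2*radius+1)^2 window and taller than `min_height`."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for y in range(rows):
        for x in range(cols):
            h = chm[y][x]
            if h < min_height:
                continue
            neigh = [chm[j][i]
                     for j in range(max(0, y - radius), min(rows, y + radius + 1))
                     for i in range(max(0, x - radius), min(cols, x + radius + 1))
                     if (j, i) != (y, x)]
            if all(h > v for v in neigh):
                tops.append((y, x))
    return tops

# Toy 6x6 canopy height model: two conical conifer crowns.
chm = [[0.0] * 6 for _ in range(6)]
for cy, cx, peak in ((1, 1, 18.0), (4, 4, 15.0)):
    for y in range(6):
        for x in range(6):
            d = abs(y - cy) + abs(x - cx)       # Manhattan distance to apex
            chm[y][x] = max(chm[y][x], peak - 4.0 * d)
tops = local_maxima(chm)
```

Each crown apex is the strict maximum of its window, so exactly the two treetops are returned.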
Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris
In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient's vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%.
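The MEWMA statistic at the heart of this approach can be sketched compactly. For simplicity this version assumes unit-variance, independent channels (so the covariance matrix reduces to the scalar factor lam/(2-lam)); the smoothing constant and alarm threshold are illustrative, and the GA parameter search from the paper is omitted:

```python
def mewma_change_points(data, lam=0.3, threshold=9.0):
    """MEWMA sketch: exponentially weighted moving average of the
    multivariate samples; an alarm is raised when the squared norm of
    the EWMA vector, scaled by its asymptotic variance lam/(2-lam),
    exceeds `threshold`."""
    d = len(data[0])
    z = [0.0] * d
    var = lam / (2.0 - lam)
    alarms = []
    for t, x in enumerate(data):
        z = [lam * xi + (1.0 - lam) * zi for xi, zi in zip(x, z)]
        t2 = sum(zi * zi for zi in z) / var
        if t2 > threshold:
            alarms.append(t)
    return alarms

# Two accelerometer channels whose mean shifts at t = 30 (say, a
# transition from standing to walking).
data = [(0.1, -0.1)] * 30 + [(2.0, 2.1)] * 30
alarms = mewma_change_points(data)
first_alarm = alarms[0] if alarms else None
```

The EWMA memory means the alarm fires one sample after the true change point here, illustrating the usual detection-delay/false-alarm trade-off that the GA tunes via lam and the threshold.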
Ranjeeth Kumar Sundararajan
In a wireless sensor network (WSN), the sensors are deployed and placed uniformly to transmit sensed data to a centralized station periodically. The major threat at the WSN network layer is the sinkhole attack, which remains a challenging issue for sensor networks: a malicious node attracts packets from the other, normal sensor nodes and drops them. This paper therefore proposes an Intrusion Detection System (IDS) mechanism to detect the intruder in a network that uses the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol for its routing operation. In the proposed algorithm, detection metrics such as the numbers of packets transmitted and received are used by the IDS agent to compute an intrusion ratio (IR), whose value represents normal or malicious activity. When a sinkhole attack is detected, the IDS agent alerts the network to stop the data transmission, making the network resilient to sinkhole attacks. Simulation results show the proposed algorithm to be efficient compared with existing work, namely MS-LEACH, in terms of minimal computational complexity and low energy consumption. The algorithm was also numerically analyzed using TETCOS NETSIM.
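The detection metric boils down to comparing packets received with packets sent onward per node. The abstract does not give the exact IR formula, so the forwarded/received ratio and threshold below are illustrative stand-ins:

```python
def detect_sinkhole(node_stats, threshold=0.5):
    """Flag nodes whose forwarded/received packet ratio falls below
    `threshold`.  A sinkhole attracts traffic and silently drops it, so
    its ratio is far below that of a normal relay.
    `node_stats` maps node id -> (packets_received, packets_forwarded)."""
    suspects = []
    for node, (received, forwarded) in node_stats.items():
        if received == 0:
            continue  # no traffic observed, nothing to judge
        ratio = forwarded / received
        if ratio < threshold:
            suspects.append((node, ratio))
    return sorted(suspects)

# Three normal relays and one node that attracts and drops traffic.
stats = {"n1": (120, 118), "n2": (95, 94), "sink": (400, 12), "n3": (80, 77)}
suspects = detect_sinkhole(stats)
```

Only the dropping node falls below the threshold; in the paper's scheme the IDS agent would then broadcast an alert to halt transmission.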
According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for detecting lab value courses that show changes in temporal correlation with the administration of a drug, and a specificity of 0.967 for detecting lab value courses that show no such changes. The algorithm is therefore appropriate for screening the data of electronic health records and for supporting human experts in revealing adverse drug reactions. A reference implementation in the Python programming language is available.
Abu Jbara, Khaled F.
This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
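The Kalman update fusing per-frame video estimates into a smoothed track can be illustrated in one dimension; the real system tracks runway edge parameters and fuses attitude data, but the predict/update cycle is the same. Noise levels and the random-walk state model below are illustrative assumptions:

```python
import random

def kalman_1d(measurements, q=0.01, r=4.0, x0=0.0, p0=100.0):
    """Scalar Kalman filter with a random-walk state model: fuse noisy
    per-frame position measurements z into a smoothed estimate x.
    q is the process noise variance, r the measurement noise variance."""
    x, p = x0, p0
    track = []
    for z in measurements:
        p += q                 # predict: state assumed constant, uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the video measurement
        p *= 1.0 - k           # posterior variance shrinks
        track.append(x)
    return track

# Noisy per-frame detections of a runway edge actually fixed at 12.0 px.
random.seed(1)
zs = [12.0 + random.uniform(-2, 2) for _ in range(200)]
track = kalman_1d(zs)
```

With a small process noise the steady-state gain is low, so the track averages over many frames and converges close to the true edge position despite the measurement jitter.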
A Fiber Bragg Grating (FBG) interrogation system with a self-adapting threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field-programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. The F–P filter was employed to improve system resolution; because the filter is non-linear, it shifts the central wavelengths, and this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of the FBG sensors is achieved by the optical switch, allowing the system to combine 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adapting threshold is designed, with a peak recognition rate of 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system: four FBG sensors were examined in a thermal chamber without stress, and when the temperature changed from 0 °C to 100 °C, the linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Comparison of different peak detection algorithms and interrogation approaches verified that the system has the best overall performance in terms of precision, capacity and speed.
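One way a threshold can "self-adapt" is to derive it from each scan's own statistics rather than a fixed level, so it tracks drifting background. The abstract does not give the authors' rule; the mean-plus-k-sigma rule and the simulated scan below are illustrative:

```python
def find_peaks_adaptive(signal, k=2.0):
    """Peak detection with a self-adapting threshold: the threshold is
    mean + k*std of the scan itself, so it follows the background level;
    peaks are local maxima above the threshold."""
    n = len(signal)
    mean = sum(signal) / n
    std = (sum((s - mean) ** 2 for s in signal) / n) ** 0.5
    thr = mean + k * std
    peaks = [i for i in range(1, n - 1)
             if signal[i] > thr
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1]]
    return peaks, thr

# Simulated F-P scan: low background plus two triangular FBG reflection peaks.
scan = [5.0] * 100
for centre in (30, 70):
    for off in range(-3, 4):
        scan[centre + off] += 50.0 * (1.0 - abs(off) / 4.0)
peaks, thr = find_peaks_adaptive(scan)
```

Both reflection peaks are recovered while the flanks of each peak, although above background, are rejected by the local-maximum test.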
Song, Jae-Gu; Kim, Jong Hyun; Seo, Dongil; Soh, Wooyoung; Kim, Seoksoo
Botnet-based cyber attacks cause large-scale damage with increasingly intelligent tools, which has called for varied research on bot detection. In this study, we developed a method of monitoring the behaviors of host-based processes from the point at which a bot herder attempts to create zombie PCs, detecting the precursor symptoms of a cyber attack. We designed an algorithm that identifies the characteristics of a botnet attempting to launch malicious behaviors by means of signature registration, covering process, reputation, network traffic, packet and source analysis together with a white list, as a measure to respond to bots at the end point.
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. The book is organized into three chapters; the first provides the background against which the system model is presented.
Kang, Myeongsu; Kim, Jaeyoung; Choi, Byeong-Keun; Kim, Jong-Myon
This paper proposes a fault detection methodology for bearings using envelope analysis with a genetic algorithm (GA)-based adaptive filter bank. Although a bandpass filter cooperates with envelope analysis for early identification of bearing defects, no general consensus has been reached as to which passband is optimal. This study explores the impact of various passbands specified by the GA in terms of a residual frequency components-to-defect frequency components ratio, which evaluates the degree of defectiveness in bearings and finally outputs an optimal passband for reliable bearing fault detection.
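As a rough illustration of the GA-driven passband search described above, the sketch below evolves a (low, high) passband against a toy fitness function. The defect-band location, frequency range and GA settings are hypothetical stand-ins, not the paper's actual residual-to-defect frequency-component ratio.

```python
import random

random.seed(0)

# Toy "spectrum": defect energy assumed concentrated around 3.0-3.4 kHz.
DEFECT_BAND = (3000.0, 3400.0)

def fitness(band):
    """Toy stand-in for the defect-to-residual energy ratio: reward
    passbands that tightly cover the assumed defect band."""
    lo, hi = band
    if hi <= lo:
        return 0.0
    overlap = max(0.0, min(hi, DEFECT_BAND[1]) - max(lo, DEFECT_BAND[0]))
    return overlap / (hi - lo)  # tight coverage scores highest

def ga_passband(pop_size=30, generations=60, fmax=10000.0):
    pop = [sorted(random.uniform(0, fmax) for _ in range(2))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)    # uniform crossover of two parents
            child = [random.choice((a[0], b[0])), random.choice((a[1], b[1]))]
            # Gaussian mutation keeps the search exploring nearby passbands.
            child = sorted(min(fmax, max(0.0, f + random.gauss(0, 100)))
                           for f in child)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga_passband()
```

A real implementation would evaluate each candidate passband by bandpass-filtering the vibration signal and computing the envelope spectrum, rather than this synthetic overlap score.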
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
Due to the fast growth of the internet over the last decades, network security problems are increasing vigorously. Humans cannot handle the speed of processes and the huge amount of data required to handle network anomalies, so substantial automation is needed in both speed and accuracy. An Intrusion Detection System (IDS) is one approach to recognizing illegal access and rare attacks in secure networks. In this paper, Naive Bayes, J48 and Random Forest classifiers are compared to compute the detection rate and accuracy of the IDS. For the experiments, the NSL-KDD dataset is used.
The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (carbon-accumulating area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in estimating the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial applications in EDM drilling measurement.
The development of intrusion detection systems (IDS) that allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. The experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
S. Smole Možina
Campylobacter jejuni and Campylobacter coli are the leading cause of bacterial food-borne enteric infection, with still-increasing incidence in most developed countries. Consuming and/or handling poultry meat is the most consistent risk factor, linked to the high prevalence of campylobacters in retail poultry meat. Recent data on the incidence of human campylobacteriosis and the prevalence of C. jejuni and C. coli in poultry meat are presented. Important aspects of Campylobacter transmission along the food chain are discussed: physiological specificities possibly enabling adaptation and survival in the food production environment, as well as the emerging resistance to antimicrobial agents used in veterinary and human medicine. Recent advances in detection and identification methods for Campylobacter spp. are mentioned as a basis for preventive strategies to bring these food-borne pathogens under control. Recent risk assessments show that mitigation strategies could be applied at different points from food-animal production to final consumption of foods. Educating consumers is important, since the critical control point remains hygiene in final food preparation.
Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo
More recently, botnet-based cyber attacks, including spam mail and DDoS attacks, have sharply increased, posing a fatal threat to Internet services. At present, antivirus businesses make it a top priority to detect malicious code in the shortest time possible (Lv.2), based on the graph showing the relation between the spread of malicious code and time, which allows them to detect malicious code only after it occurs. Despite early detection, however, it is not possible to prevent malicious code from occurring. Thus, we have developed an algorithm that can detect precursor symptoms at Lv.1 to prevent a cyber attack that uses an evasion method of 'an executing-environment-aware attack', by analyzing system behaviors and monitoring memory.
Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua
Recognition of behavioural features is important for detecting oestrus and sickness in dairy herds, and there is a need for heat detection aids. The detection method in this paper is based on measuring the individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. The data on behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate K-means algorithm (FAKM) was proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.
Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran
As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability.
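The core intuition of traversal-time-plus-hop-count analysis can be sketched as follows: a wormhole tunnels traffic over a long distance while advertising it as a single hop, so the measured per-hop traversal time becomes anomalously large. The per-hop time budget and tolerance factor below are invented figures for illustration, not values from the paper.

```python
# Hypothetical per-hop processing/propagation budget in milliseconds.
EXPECTED_PER_HOP_MS = 2.0
TOLERANCE = 1.5  # allow some jitter before raising an alarm

def wormhole_suspected(route_rtt_ms, hop_count):
    """Flag a route whose measured per-hop traversal time far exceeds
    the expected budget - the signature of a long tunneled link that
    is counted as one hop."""
    per_hop = route_rtt_ms / (2 * hop_count)  # RTT covers both directions
    return per_hop > EXPECTED_PER_HOP_MS * TOLERANCE

# A 4-hop route at the nominal 2 ms/hop vs. one inflated by a tunnel.
normal_route = wormhole_suspected(route_rtt_ms=16.0, hop_count=4)  # 2.0 ms/hop
tunneled = wormhole_suspected(route_rtt_ms=60.0, hop_count=4)      # 7.5 ms/hop
```

The attraction of this style of check for constrained sensor nodes is that it needs only an RTT measurement and the hop count already carried by the routing protocol.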
Falls are dangerous for the elderly population; therefore many fall detection systems have been developed. However, previous methods are either too bulky for elderly people or use only a single sensor to isolate falls from activities of daily living, which makes falls difficult to distinguish. In this paper, we present a cost-effective and easy-to-use portable fall-detection sensor and algorithm. Specifically, to detect human falls, we used the three-axis accelerometer and three-axis gyroscope in a mobile phone. We used a Fourier-descriptor-based frequency analysis method to classify both normal and falling status. From the experimental results, the proposed method detects falling status with 96.14% accuracy.
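A minimal illustration of accelerometer-based fall detection, far simpler than the paper's Fourier-descriptor classifier: a near-free-fall dip in total acceleration followed shortly by an impact spike. All thresholds and the synthetic traces are assumptions.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def detect_fall(samples, free_thresh=0.5 * G, impact_thresh=2.5 * G, window=10):
    """Flag a fall when a near-free-fall dip is followed, within `window`
    samples, by an impact spike. Thresholds are illustrative, not tuned."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_thresh and any(mm > impact_thresh for mm in mags[i:i + window]):
            return True
    return False

# Synthetic traces: quiet standing vs. a short drop followed by an impact.
standing = [(0.1, 0.2, 9.8)] * 50
fall = standing[:20] + [(0.1, 0.1, 1.0)] * 5 + [(5.0, 5.0, 30.0)] + standing[:10]
```

In practice this magnitude-threshold scheme confuses sitting down hard with falling, which is exactly why the paper combines accelerometer and gyroscope data with frequency-domain features.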
Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego
A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
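The white top-hat transform mentioned above (the image minus its morphological opening) can be sketched in one dimension; a 2D mammogram version applies the same min/max filtering with a 2D structuring element. The signal values here are synthetic stand-ins for microcalcifications on a tissue background.

```python
def erode(sig, k):
    """Sliding-window minimum with a flat structuring element of size k."""
    r = k // 2
    return [min(sig[max(0, i - r): i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    """Sliding-window maximum with a flat structuring element of size k."""
    r = k // 2
    return [max(sig[max(0, i - r): i + r + 1]) for i in range(len(sig))]

def white_tophat(sig, k):
    """Signal minus its morphological opening: keeps bright features
    narrower than the structuring element, suppressing the background."""
    opened = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opened)]

# Slowly rising background with two narrow bright spikes.
background = [10 + i * 0.1 for i in range(100)]
signal = background[:]
signal[30] += 5.0
signal[70] += 7.0
th = white_tophat(signal, k=5)
```

After the transform the smooth background is flattened to near zero while the two spikes survive, which is what makes the subsequent segmentation and classification stages easier.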
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available, and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
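The integral-image trick that enables fast local clutter statistics can be sketched as follows: once the summed-area table is built in one pass, the sum over any rectangular window costs four lookups, regardless of window size. The tiny image below is illustrative only.

```python
def integral_image(img):
    """Summed-area table: S[i][j] = sum of img[0..i-1][0..j-1]."""
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        row_sum = 0
        for j in range(w):
            row_sum += img[i][j]
            S[i + 1][j + 1] = S[i][j + 1] + row_sum
    return S

def window_sum(S, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] in O(1) via four lookups."""
    return (S[bottom + 1][right + 1] - S[top][right + 1]
            - S[bottom + 1][left] + S[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
S = integral_image(img)
```

For CFAR, the same table built over the image (and over its square) yields local means and variances of the clutter ring in constant time per pixel, which is what makes iterative censoring affordable.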
Topology potential theory is a new community detection theory for complex networks, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node differences and assume that all nodes have the same mass. This hypothesis leads to inaccuracy in the topology potential calculation and thus decreases the precision of community detection. Inspired by the idea of the PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in the complex network: the more important the node is, the bigger its mass. Simulation results show that, after taking node mass into consideration, the topology potential of a node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise.
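Since the proposed node mass is PageRank-inspired, a minimal power-iteration PageRank over an adjacency list gives the flavor of such an importance score; the damping factor and iteration count are conventional defaults, and the paper's coupling of this score to topology potential is not reproduced here.

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank on an adjacency list {node: [neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Teleportation term, then distribute each node's rank to its targets.
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in adj.items():
            targets = outs if outs else nodes      # dangling nodes spread evenly
            share = rank[v] / len(targets)
            for u in targets:
                new[u] += damping * share
        rank = new
    return rank

# A hub (node 0) linked by every other node should receive the largest mass.
adj = {0: [1], 1: [0], 2: [0], 3: [0], 4: [0]}
mass = pagerank(adj)
```

Used as node mass, such a score makes important nodes generate stronger potential fields, which is the correction the paper argues for.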
Cycle-slip detection methods based on geometry-free (GF) observation combinations can be insensitive to certain cycle slips. This paper analyzes the principle of cycle-slip detection based on the geometric relationship, then studies the similarities and differences of several geometry-free phase combinations and the effect of adding a Melbourne–Wübbena (MW) combination. We propose selecting GF combinations by cross-sectional area. Finally, BeiDou triple-frequency data are used to validate the conclusions. We conclude that two geometry-free phase combinations are the most reasonable choice for detecting insensitive cycle slips, and that adding an MW combination can markedly decrease the number of insensitive cycle slips. The optimized algorithm leaves only one insensitive cycle slip, and all detected cycle slips were repaired successfully.
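A bare-bones geometry-free check can be sketched as below: form a two-frequency phase combination (in which geometry, clocks and troposphere cancel) and flag epoch-to-epoch jumps. The wavelengths are approximate BeiDou B1/B2 values and the jump threshold is an assumption; a slip that leaves the combination nearly unchanged is exactly the "insensitive" case the paper targets.

```python
# Approximate BeiDou B1/B2 carrier wavelengths in metres (assumed values).
LAM1, LAM2 = 0.1920, 0.2483

def gf_series(phi1, phi2):
    """Geometry-free combination in metres: the series is dominated by
    slow ionospheric drift, so a sudden jump flags a cycle slip."""
    return [LAM1 * a - LAM2 * b for a, b in zip(phi1, phi2)]

def detect_slips(gf, threshold=0.05):
    """Indices where the epoch-to-epoch GF change exceeds the threshold."""
    return [i for i in range(1, len(gf)) if abs(gf[i] - gf[i - 1]) > threshold]

# Synthetic carrier-phase series (cycles) with a 1-cycle slip on phi1 at epoch 6.
phi1 = [1000.0 + 0.010 * t for t in range(10)]
phi2 = [800.0 + 0.008 * t for t in range(10)]
phi1 = phi1[:6] + [v + 1.0 for v in phi1[6:]]
slips = detect_slips(gf_series(phi1, phi2))
```

A slip pair whose wavelength-weighted effects nearly cancel in this difference would go undetected, which is why the paper combines several GF combinations with an MW combination.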
Furstenberg, Robert; Kendziora, Christopher A.; Papantonakis, Michael R.; Nguyen, Viet; Byers, Jeff; McGill, R. Andrew
We are developing a technology for stand-off detection based on photo-thermal infrared imaging spectroscopy (PT-IRIS). In this approach, one or more infrared (IR) quantum cascade lasers are tuned to strong absorption bands in the analytes and directed at the sample while an IR focal plane array is used to image the subsequent thermal emissions. In this paper we present recent advances in the theory and numerical modeling of photo-thermal imaging and spectroscopy of particulates on flat substrates. We compare the theoretical models with experimental data taken on our mobile cart-based PT-IRIS system. Synthetic data of the photo-thermal response was calculated for a wide range of analytes, substrates, particle sizes, and analyte mass loadings using their known thermo-physical and optical properties. These synthetic data sets can now be generated quickly and were used to accelerate the development of detection algorithms. The performance of detection algorithms will also be discussed.
Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.
Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method for detecting blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. The potential complexity of such scenes can be high when the range of material types and conditions containing blood stains at a crime scene is considered. Some stains are hard to detect with the unaided eye, especially if a conscious effort has been made to clean the scene (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400–700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed-Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
Cocron, Peter; Bachl, Veronika; Früh, Laura; Koch, Iris; Krems, Josef F
The low noise emission of battery electric vehicles (BEVs) has led to discussions about how to address potential safety issues for other road users. Legislative actions have already been undertaken to implement artificial sounds. In previous research, BEV drivers reported that due to low noise emission they paid particular attention to pedestrians and bicyclists. For the current research, we developed a hazard detection task to test whether drivers with BEV experience respond faster to incidents, which arise due to the low noise emission, than inexperienced drivers. The first study (N=65) revealed that BEV experience only played a minor role in drivers' response to hazards resulting from low BEV noise. The tendency to respond, reaction times and hazard evaluations were similar among experienced and inexperienced BEV drivers; only small trends in the assumed direction were observed. Still, both groups clearly differentiated between critical and non-critical scenarios and responded accordingly. In the second study (N=58), we investigated additionally if sensitization to low noise emission of BEVs had an effect on hazard perception in incidents where the noise difference is crucial. Again, participants in all groups differentiated between critical and non-critical scenarios. Even though trends in response rates and latencies occurred, experience and sensitization to low noise seemed to only play a minor role in detecting hazards due to low BEV noise. An additional global evaluation of BEV noise further suggests that even after a short test drive, the lack of noise is perceived more as a comfort feature than a safety threat.
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing since it can provide continuous observations in all days and all weather. SAR can be used for extracting surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the Chinese National Antarctic Research Expedition (CHINARE) 33rd cruise set sail for the sea ice zone in Antarctica. Accurate mapping of the spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve lead detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probability estimated from statistical distributions. For mixture statistical distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited for single-distribution parameter estimation, and an iterative Expectation Maximization (EM) algorithm is used to calculate the parameters of the mixture-distribution-based CRF model. In posterior probability inference, a graph-cut energy minimization method is adopted for initial lead detection. Post-processing procedures, including an aspect ratio constraint and spatial smoothing, are utilized to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a
Akram, Vahid Khalilpour; Dagdeviren, Orhan
Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5-fold. PMID:23845930
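For contrast with the distributed algorithms above, the classical centralized bridge-finding method (lowlink depth-first search, due to Tarjan) looks like this; it assumes the whole topology is known at one node, which is exactly the assumption the paper's distributed setting avoids.

```python
def find_bridges(adj):
    """Iterative DFS lowlink computation: an edge (u, v) is a bridge when
    low[v] > disc[u], i.e. the subtree under v has no back edge climbing
    above u. Works on an undirected simple graph given as adjacency lists."""
    disc, low, bridges, timer = {}, {}, [], [0]
    for root in adj:
        if root in disc:
            continue
        disc[root] = low[root] = timer[0]; timer[0] += 1
        stack = [(root, None, iter(adj[root]))]
        while stack:
            u, parent, it = stack[-1]
            advanced = False
            for v in it:
                if v not in disc:
                    disc[v] = low[v] = timer[0]; timer[0] += 1
                    stack.append((v, u, iter(adj[v])))
                    advanced = True
                    break
                elif v != parent:          # back edge to an ancestor
                    low[u] = min(low[u], disc[v])
            if not advanced:
                stack.pop()
                if parent is not None:
                    low[parent] = min(low[parent], low[u])
                    if low[u] > disc[parent]:
                        bridges.append((parent, u))
    return bridges

# Two triangles joined by the single edge (2, 3): that edge is the only bridge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
```

The distributed versions in the paper compute equivalent information piecewise over a BFS tree, trading this global view for per-node message exchanges.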
In engineering problems, due to physical and cost constraints, the best results obtained by a global optimization algorithm cannot always be realized. Under such conditions, if multiple solutions (local and global) are known, the implementation can be quickly switched to another solution without much interruption of the design process. This paper presents a new swarm multimodal optimization algorithm named collective animal behavior (CAB). Animal groups, such as schools of fish, flocks of birds, swarms of locusts, and herds of wildebeest, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These collective behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, to follow better migration routes, to improve their aerodynamics, and to avoid predation. In the proposed algorithm, searcher agents emulate a group of animals which interact with each other based on simple biological laws that are modeled as evolutionary operators. Numerical experiments are conducted to compare the proposed method with state-of-the-art methods on benchmark functions. The proposed algorithm has also been applied to the engineering problem of multi-circle detection, achieving satisfactory results.
Matthews, Bryan L.; Srivastava, Ashok N.
Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
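A toy version of the virtual-sensor idea: estimate one channel from correlated channels using an ensemble of simple models, and use the spread of their predictions as the uncertainty measure. The channel names and readings below are fabricated, and NASA's actual ensemble learners are far richer than these per-channel linear fits.

```python
import statistics

def fit_linear(xs, ys):
    """Least-squares slope/intercept for a single-input linear predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

# Hypothetical training data: two channels correlated with the target pressure.
channels = {
    "valve_cmd": [1.0, 2.0, 3.0, 4.0, 5.0],
    "temp":      [10.0, 12.0, 14.0, 16.0, 18.0],
}
pressure = [2.1, 4.0, 6.1, 7.9, 10.0]
models = {name: fit_linear(xs, pressure) for name, xs in channels.items()}

def virtual_sensor(inputs):
    """Ensemble estimate of the pressure plus the prediction spread:
    a large spread signals that the correlated channels disagree."""
    preds = [m * inputs[name] + b for name, (m, b) in models.items()]
    return statistics.mean(preds), statistics.stdev(preds)

est, unc = virtual_sensor({"valve_cmd": 3.0, "temp": 14.0})
```

A fault would then be flagged when the physical sensor's reading departs from `est` by much more than `unc`, mirroring how the prediction spread served as an uncertainty estimate in the Shuttle study.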
Roberts, Nicola D; Kortschak, R Daniel; Parker, Wendy T; Schreiber, Andreas W; Branford, Susan; Scott, Hamish S; Glonek, Garique; Adelson, David L
With the advent of relatively affordable high-throughput technologies, DNA sequencing of cancers is now common practice in cancer research projects and will be increasingly used in clinical practice to inform diagnosis and treatment. Somatic (cancer-only) single nucleotide variants (SNVs) are the simplest class of mutation, yet their identification in DNA sequencing data is confounded by germline polymorphisms, tumour heterogeneity and sequencing and analysis errors. Four recently published algorithms for the detection of somatic SNV sites in matched cancer-normal sequencing datasets are VarScan, SomaticSniper, JointSNVMix and Strelka. In this analysis, we apply these four SNV calling algorithms to cancer-normal Illumina exome sequencing of a chronic myeloid leukaemia (CML) patient. The candidate SNV sites returned by each algorithm are filtered to remove likely false positives, then characterized and compared to investigate the strengths and weaknesses of each SNV calling algorithm. Comparing the candidate SNV sets returned by VarScan, SomaticSniper, JointSNVMix2 and Strelka revealed substantial differences with respect to the number and character of sites returned; the somatic probability scores assigned to the same sites; their susceptibility to various sources of noise; and their sensitivities to low-allelic-fraction candidates. Data accession number SRA081939; code at http://code.google.com/p/snv-caller-review/. Supplementary data are available at Bioinformatics online.
Ravari, Alireza Norouzzadeh; Taghirad, Hamid D
In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image taken as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurement representations is reduced. Although loop closure detection can be considered a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods.
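The normalized compression distance used for place comparison has a standard computable form: a real compressor stands in for the uncomputable Kolmogorov complexity. A minimal sketch with zlib as the compressor (the paper's sparse image models and dictionary are not reproduced; the byte strings below are toy observations):

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length as a proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs,
    near 1 for unrelated ones."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

scene_a = b"corridor wall door plant door wall" * 20
scene_b = b"corridor wall door plant door wall" * 20   # revisited place
scene_c = bytes(range(256)) * 3                        # unrelated observation

d_same = ncd(scene_a, scene_b)
d_diff = ncd(scene_a, scene_c)
```

A loop closure candidate is then accepted when the distance between the current and a stored observation falls below a threshold, with the metric's universality doing the work that hand-crafted features would otherwise do.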
Carvajal-Godinez, Johan; Guo, Jian; Gill, Eberhard
Failure detection, isolation, and recovery is an essential requirement of any space mission design. Several spacecraft components, especially sensors, are prone to performance deviation due to intrinsic physical effects. For that reason, innovative approaches for the treatment of faults in onboard sensors are necessary. This work introduces the concept of agent-based fault detection and recovery for sensors used in satellite attitude determination and control. It focuses on the implementation of an algorithm for addressing linear drift bias in gyroscopes. The algorithm was implemented using an agent-based architecture that can be integrated into the satellite's onboard software. Numerical simulations were carried out to show the effectiveness of this scheme in satellite operations. The proposed algorithm showed a reduction of up to 50% in the stabilization time for the detumbling maneuver, and also an improvement in pointing accuracy of up to 20% when applied in precise payload pointing procedures. The relevance of this contribution is its added value for optimizing the launch and early operation of small satellite missions, as well as being an enabler for innovative satellite functions, for instance optical downlink communication.
Kora, Padmavathi; Sri Rama Krishna, K.
Atrial fibrillation (AF) is a type of heart abnormality in which electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. This paper consists of three major steps for the detection of heart disease: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as features of the ECG signal. These features are optimized using the Bat algorithm. A Levenberg–Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
Yugandhara Prabhakar, Shinde; Parganiha, Pratishtha; Madhu Viswanatham, V.; Nirmala, M.
In the cyber security world, botnet attacks are increasing, and detecting botnets is a challenging task. A botnet is a group of computers connected in a coordinated fashion to carry out malicious activities. Many techniques have been developed and used to detect and prevent botnet traffic and the associated attacks. In this paper, a comparative study is done on the Genetic Algorithm (GA) and the Self-Organizing Map (SOM) for detecting botnet network traffic. Both are soft computing techniques and are used in this paper as a data analytics system. The GA is based on the natural evolution process, while the SOM is a type of artificial neural network that uses unsupervised learning and classifies the data according to its neurons. A sample of the KDD99 dataset is used as input to the GA and the SOM.
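A minimal numpy sketch of a Self-Organizing Map of the kind compared above; the synthetic 2-D points, the 1-D unit grid and all parameter values are illustrative assumptions, not the paper's KDD99 setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.0):
    """Train a tiny 1-D-grid SOM: move the best matching unit (and its
    grid neighbors, weighted by a shrinking Gaussian) toward each sample."""
    weights = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3    # shrinking neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)     # grid distance to BMU
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# two well-separated synthetic clusters stand in for "normal" vs "botnet-like"
normal = rng.normal(0.2, 0.05, (50, 2))
attack = rng.normal(0.8, 0.05, (50, 2))
weights = train_som(np.vstack([normal, attack]))

def classify(x, weights):
    """Assign a sample to its nearest SOM unit."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, samples from the two clusters map to different units, which is the clustering behavior the comparison above relies on.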
Full Text Available This paper addresses the theoretical and practical aspects of an important problem in computer chess programming - the problem of draw detection in cases of position repetition. The standard approach used in the majority of computer chess programs is hash-oriented. This method is sufficient in most cases, as the Zobrist keys are already present due to the systemic positional hashing, so that they need not be computed anew for the purpose of draw detection. The new type of algorithm that we have developed solves the problem of draw detection in cases when Zobrist keys are not used in the program, i.e. in cases when the memory is not hashed.
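For contrast, the standard hash-oriented approach described above can be sketched as follows; the piece encoding and the plain piece-square hashing are simplifying assumptions for illustration, not the paper's program.

```python
import random

# Zobrist hashing sketch: each (piece, square) pair gets a random 64-bit key,
# a position hash is the XOR of the keys of its occupied squares, and a draw
# by repetition is claimable when the same hash occurs three times in total.
random.seed(42)
PIECES = "PNBRQKpnbrqk"
ZOBRIST = {(p, sq): random.getrandbits(64) for p in PIECES for sq in range(64)}

def position_hash(board):
    """board: dict mapping square index (0-63) -> piece character."""
    h = 0
    for sq, piece in board.items():
        h ^= ZOBRIST[(piece, sq)]
    return h

def is_threefold_repetition(history_hashes, current_hash):
    """Draw claimable when the current position occurred twice before."""
    return history_hashes.count(current_hash) >= 2

board = {0: "R", 4: "K", 60: "k"}
h = position_hash(board)
history = [h, 123456, h]              # same position seen twice earlier
print(is_threefold_repetition(history, h))  # True
```

Because the hash is an XOR, it is independent of the order in which squares are visited, and it can be updated incrementally as moves are made, which is why chess programs keep these keys around anyway.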
Tilaro, Filippo; Gonzalez-Berges, Manuel; Roshchin, Mikhail; Varela, Fernando
The CERN automation infrastructure consists of over 600 heterogeneous industrial control systems with around 45 million deployed sensors, actuators and control objects. The monitoring of such a huge system therefore represents a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in any of the CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge; the other two mine the historical generated data to create a simple model of the system, which is then used to detect faulty sensor measurements. The presented methods can be categorized as dynamic unsupervised anomaly detection; “dynamic” since the behaviour of the system and the evolution of its attributes are observed and change in time. They are “unsupervised” because we are trying to predict faulty events without examples in the data history. So, the described strategies involve monitoring t...
Liu, Yecai; Posey, Drew L; Cetron, Martin S; Painter, John A
Before 2007, immigrants and refugees bound for the United States were screened for tuberculosis (TB) by a smear-based algorithm that could not diagnose smear-negative/culture-positive TB. In 2007, the Centers for Disease Control and Prevention implemented a culture-based algorithm. To evaluate the effect of the culture-based algorithm on preventing the importation of TB to the United States by immigrants and refugees from foreign countries. Population-based, cross-sectional study. Panel physician sites for overseas medical examination. Immigrants and refugees with TB. Comparison of the increase of smear-negative/culture-positive TB cases diagnosed overseas among immigrants and refugees by the culture-based algorithm with the decline of reported cases among foreign-born persons within 1 year after arrival in the United States from 2007 to 2012. Of the 3 212 421 arrivals of immigrants and refugees from 2007 to 2012, a total of 1 650 961 (51.4%) were screened by the smear-based algorithm and 1 561 460 (48.6%) were screened by the culture-based algorithm. Among the 4032 TB cases diagnosed by the culture-based algorithm, 2195 (54.4%) were smear-negative/culture-positive. Before implementation (2002 to 2006), the annual number of reported cases among foreign-born persons within 1 year after arrival was relatively constant (range, 1424 to 1626 cases; mean, 1504 cases) but decreased from 1511 to 940 cases during implementation (2007 to 2012). During the same period, the annual number of smear-negative/culture-positive TB cases diagnosed overseas among immigrants and refugees bound for the United States by the culture-based algorithm increased from 4 to 629. This analysis did not control for the decline in new arrivals of nonimmigrant visitors to the United States and the decrease of incidence of TB in their countries of origin. Implementation of the culture-based algorithm may have substantially reduced the incidence of TB among newly arrived, foreign-born persons in
Iskandar, Afif Akbar
Indonesia has one of the largest user bases of social media, which can be useful for detecting popular trends, including disease outbreaks, via topic-extraction methods such as NMF. However, the texts being spread are unstructured and need to be cleaned before being processed. One method of cleaning texts is using regular expressions. However, text data from social media have many variations, which means that the regular expressions have to be adapted to each dataset to be cleaned; hence, we need an algorithm to "learn" the form of the texts that need to be cleaned. In this paper, we propose a framework for cleaning Twitter data and extracting topics from it, called RED-NMF: a feature extraction and filtering method based on a regular expression discovery algorithm for data cleaning and on non-negative matrix factorization for extracting the topics.
Liu, Fei; Zhang, Xi; Jia, Yan
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia
The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge and contour detection algorithm is proposed based on the discrete wavelet transform. A technique for applying the filters in the three directions (horizontal, vertical and diagonal) of the image is used to capture the maximum of the existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, which allows 2D wavelet analysis of an image sequence while maintaining the flexibility of the system to support an adaptive algorithm.
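The underlying operation can be sketched in software: a single-level 2-D Haar DWT whose three detail bands (horizontal, vertical, diagonal) carry the contour information the architecture extracts. This is an illustrative numpy version of the transform under a Haar-filter assumption, not the paper's VHDL design.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT; image dimensions must be even.
    Returns the approximation cA and the detail bands cH, cV, cD."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    cA = (a + b + c + d) / 4
    cH = (a + b - c - d) / 4   # responds to horizontal edges
    cV = (a - b + c - d) / 4   # responds to vertical edges
    cD = (a - b - c + d) / 4   # responds to diagonal edges
    return cA, cH, cV, cD

# toy image: dark on the left, bright from column 3 -> a vertical edge
img = np.zeros((8, 8))
img[:, 3:] = 1.0
cA, cH, cV, cD = haar_dwt2(img)
# combining the detail-band magnitudes yields a simple contour map
edge_map = np.abs(cH) + np.abs(cV) + np.abs(cD)
```

The edge map is nonzero only in the block column straddling the intensity step, which is the behavior the three-direction filtering above exploits.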
Castillo, D.; Samaniego, René; Jiménez, Y.; Cuenca, L.; Vivanco, O.; Rodríguez-Álvarez, M. J.
This work presents progress toward the development of an algorithm for the automatic detection of demyelinating lesions and cerebral ischemia in magnetic resonance images, which are of paramount importance in the diagnosis of brain diseases. The image sequences to be used are T1, T2, and FLAIR. Brain demyelination lesions occur due to damage of the myelin layer of nerve fibers, and this deterioration is the cause of serious pathologies such as multiple sclerosis (MS), leukodystrophy, and acute disseminated encephalomyelitis. Cerebral or cerebrovascular ischemia is the interruption of the blood supply to the brain, thereby interrupting the flow of oxygen and nutrients needed to maintain the functioning of brain cells. The algorithm allows the differentiation between these lesions.
Serrano, Carmen; Acha, Begona; Revuelto, Sergio
Diabetic retinopathy is a common disease among diabetic patients that can cause blindness. The number of microaneurysms in an eye fundus indicates the evolution stage of the illness. In this paper, an algorithm to automatically detect microaneurysms in retinal angiograms is proposed. The method has three main steps: a preprocessing step, seed detection and a subsequent region-growing algorithm. The preprocessing step consists of Gaussian high-pass filtering followed by top-hat filtering; its aim is to eliminate the vascular tree while enhancing microaneurysms. In the second step, 2-D adaptive filtering is performed and those pixels where the prediction error is high are considered seeds. After the region growing, only regions that fit certain validation criteria are considered microaneurysms. These are intensity, contrast and shape criteria. The intensity and contrast criteria are typical of region-growing algorithms. To create the shape criterion, we use the fact that microaneurysms can be modelled as 2-D Gaussian functions. During the application of this criterion we pass each grown region through a bank of nine correlators: a 2-D Gaussian function and eight linear segments oriented in eight different directions. We then compare the outputs of this bank and require that a region can be a microaneurysm only when the maximum correlation peak is obtained with the Gaussian correlator. In this study we tested the algorithm on 11 images containing 711 microaneurysms in total, and obtained a sensitivity of 90.72% for a positive predictive value of 82.35%.
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor alone to determine the location of a detector would limit the possible locations, but would not account for room dependence (facility-dependent deviation) in generating a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
The incorporation of small, privately owned generation operating in parallel with, and supplying power to, the distribution network is becoming more widespread. This method of operation does however have problems associated with it. In particular, a loss of the connection to the main utility supply which leaves a portion of the utility load connected to the embedded generator will result in a power island. This situation presents possible dangers to utility personnel and the public, complications for smooth system operation and probable plant damage should the two systems be reconnected out-of-synchronism. Loss of Grid (or Islanding), as this situation is known, is the subject of this thesis. The work begins by detailing the requirements for operation of generation embedded in the utility supply with particular attention drawn to the requirements for a loss of grid protection scheme. The mathematical basis for a new loss of grid protection algorithm is developed and the inclusion of the algorithm in an integrated generator protection scheme described. A detailed description is given on the implementation of the new algorithm in a microprocessor based relay hardware to allow practical tests on small embedded generation facilities, including an in-house multiple generator test facility. The results obtained from the practical tests are compared with those obtained from simulation studies carried out in previous work and the differences are discussed. The performance of the algorithm is enhanced from the theoretical algorithm developed using the simulation results with simple filtering together with pattern recognition techniques. This provides stability during severe load fluctuations under parallel operation and system fault conditions and improved performance under normal operating conditions and for loss of grid detection. In addition to operating for a loss of grid connection, the algorithm will respond to load fluctuations which occur within a power island
A. L. Salih
Full Text Available The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly global) image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with an available manually determined CSFD, such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to
Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
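The SVD building block of the approach above can be sketched as a total-least-squares line fit to range points (the hopping and reverse-hop logic, and the Hough stage, are omitted here; the point cloud is synthetic).

```python
import numpy as np

def fit_line_svd(points):
    """Fit a line to (N, 2) points by SVD (total least squares).
    Returns (centroid, unit direction vector, residual); the residual is
    the smallest singular value, near zero when points are collinear."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]          # dominant direction of the point cloud
    residual = s[-1]           # collinearity measure
    return centroid, direction, residual

# synthetic scan points lying on the line y = 2x + 1
t = np.linspace(0, 1, 20)
pts = np.column_stack([t, 2 * t + 1])
centroid, direction, residual = fit_line_svd(pts)
slope = direction[1] / direction[0]
```

A small residual signals that the current window of scan points forms a line segment; a jump in the residual is the natural trigger for starting a new segment.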
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
Solar power ramp events (SPREs) are those that significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetration in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) for ramp detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
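The first stage described above, swinging-door segmentation, can be sketched as follows; the tolerance value and the uniform time step are assumptions for this synthetic illustration, and the dynamic-programming merge of OpSDA is omitted.

```python
def swinging_door(values, epsilon):
    """Segment a power time series into piecewise-linear runs.
    Two 'doors' (slope bounds) swing shut as points arrive; when they
    cross, no single line within +/- epsilon fits, so a new segment starts.
    Returns the start index of each segment (uniform time steps assumed)."""
    starts = [0]
    anchor = 0
    slope_up = float("inf")    # most permissive upper-door slope
    slope_lo = float("-inf")   # most permissive lower-door slope
    for i in range(1, len(values)):
        dt = i - anchor
        slope_up = min(slope_up, (values[i] + epsilon - values[anchor]) / dt)
        slope_lo = max(slope_lo, (values[i] - epsilon - values[anchor]) / dt)
        if slope_lo > slope_up:            # doors crossed: open a new segment
            anchor = i - 1
            starts.append(anchor)
            slope_up = values[i] + epsilon - values[anchor]
            slope_lo = values[i] - epsilon - values[anchor]
    return starts

# flat, then an upward ramp, then flat again
ramp = [0, 0, 0, 1, 2, 3, 3, 3]
segments = swinging_door(ramp, epsilon=0.1)   # [0, 2, 5]
```

The segment boundaries isolate the ramp; a subsequent pass (OpSDA's dynamic programming step) would then merge adjacent segments whose slopes exceed the ramp-decision thresholds.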
Hoshino, Eri; Hayashi, Kuniyoshi; Suzuki, Mitsuyoshi; Obatake, Masayuki; Urayama, Kevin Y; Nakano, Satoshi; Taura, Yasuyuki; Nio, Masaki; Takahashi, Osamu
The stool color card has been the primary tool for identifying acholic stools in infants with biliary atresia (BA) in several countries. However, BA stools are not always acholic, as obliteration of the bile duct occurs gradually. This study aims to introduce Baby Poop (Baby unchi in Japanese), a free iPhone application employing a detection algorithm to capture subtle differences in colors, even with non-acholic BA stools. The application is designed for use by caregivers of infants aged approximately 2 weeks-1 month. Baseline analysis to determine the optimal color parameters predicting BA stools was performed using logistic regression (n = 50). Pattern recognition and machine learning processes were performed using 30 BA and 34 non-BA images. An additional 5 BA and 35 non-BA pictures were used to test accuracy. Hue, saturation, and value (HSV) were the preferred parameters for BA stool identification. Sensitivity and specificity were both 100% (95% confidence interval 0.48-1.00 and 0.90-1.00, respectively), even among a collection of visually non-acholic, i.e., pigmented BA stools and relatively pale-colored non-BA stools. Results suggest that an iPhone mobile application integrated with a detection algorithm is an effective and convenient modality for early detection of BA, and potentially for other related diseases.
Full Text Available A short-time Fourier transform (STFT)-based algorithm is suggested to distinguish falls from various activities of daily living (ADLs). Forty male subjects volunteered in the experiments, which included three types of falls and four types of ADLs. An inertial sensor unit attached to the middle of the two anterior superior iliac spines was used to measure the 3-axis accelerations at 100 Hz. The measured accelerations were transformed to signal vector magnitude values to be analyzed using the STFT. The powers of the low-frequency components were extracted, and a fall was declared when the normalized power was less than the threshold (50% of the normal power). Most power was observed at the frequency band below 5 Hz in all activities, but dramatic changes in the power were found only in falls. The specificity of the 1–3 Hz frequency components was the best (100%), but the sensitivity was much smaller compared with the 4 Hz component. The 4 Hz component showed the best fall detection, with 96.9% sensitivity and 97.1% specificity. We believe that the suggested STFT-based algorithm would be useful for fall detection and classification from ADLs.
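The signal-vector-magnitude and low-frequency-power idea above can be sketched on synthetic data; note the decision rule here is recast as a simple power-ratio test on a single window, which is an assumption for this illustration rather than the paper's normalized 50% threshold on STFT frames.

```python
import numpy as np

FS = 100  # sampling rate of the inertial sensor (Hz), as in the experiments

def band_power(svm, fs, f_lo, f_hi):
    """Power of the signal-vector-magnitude trace inside [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(svm - svm.mean())) ** 2
    freqs = np.fft.rfftfreq(len(svm), d=1 / fs)
    return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

def is_fall(svm, normal_power, fs=FS):
    # a fall produces a burst of low-frequency power far above the level
    # seen in ordinary activities (illustrative ratio test, not the paper's)
    return band_power(svm, fs, 1, 5) > 2 * normal_power

t = np.arange(0, 2, 1 / FS)
adl = 1 + 0.2 * np.sin(2 * np.pi * 2 * t)   # gentle 2 Hz daily-living motion
fall = adl.copy()
fall[90:110] += 3.0                          # brief high-energy impact

normal_power = band_power(adl, FS, 1, 5)
```

The impact spreads broadband energy across the 1-5 Hz band, so the fall window trips the test while the ADL window does not.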
Kang, Xiaomin; Huang, Baoqi; Qi, Guodong
Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention, since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of the smartphone placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments were conducted involving eight subjects and different walking scenarios in a realistic environment. The proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than that of other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than both several well-known counterparts and commercial products.
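The frequency-domain idea above can be sketched as follows: the magnitude of the 3-D angular velocity is placement-invariant, and its dominant FFT frequency inside the human gait band signals walking and gives the step frequency. The sampling rate, band limits and power-ratio threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

FS = 50  # assumed gyroscope sampling rate (Hz)

def detect_walking(gyro_xyz, fs=FS, band=(0.5, 3.0), power_ratio=0.2):
    """Placement-invariant walking test on the gyro magnitude signal.
    Returns (is_walking, step_frequency_Hz)."""
    mag = np.linalg.norm(gyro_xyz, axis=1)
    spec = np.abs(np.fft.rfft(mag - mag.mean())) ** 2
    freqs = np.fft.rfftfreq(len(mag), d=1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if spec[in_band].sum() < power_ratio * spec[1:].sum():
        return False, 0.0                      # no dominant gait rhythm
    return True, float(freqs[in_band][np.argmax(spec[in_band])])

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / FS)
# rotation rhythm at ~1.8 Hz on one axis plus sensor noise on the others
walk = np.column_stack([2 + np.sin(2 * np.pi * 1.8 * t),
                        rng.normal(0, 0.1, t.size),
                        rng.normal(0, 0.1, t.size)])
still = rng.normal(0, 0.1, (t.size, 3))

is_walk, step_freq = detect_walking(walk)
step_count = round(step_freq * 10)   # steps over the 10 s window
```

Because only the magnitude of the angular velocity is used, rotating the phone (changing its placement) leaves the detection unchanged, which is the placement-invariance the paper targets.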
Schack, Tim; Safi Harb, Yosef; Muma, Michael; Zoubir, Abdelhak M
Atrial fibrillation (AF) is one of the major causes of stroke, heart failure, sudden death, and cardiovascular morbidity and the most common type of arrhythmia. Its diagnosis and the initiation of treatment, however, currently requires electrocardiogram (ECG)-based heart rhythm monitoring. The photoplethysmogram (PPG) offers an alternative method, which is convenient in terms of its recording and allows for self-monitoring, thus relieving clinical staff and enabling early AF diagnosis. We introduce a PPG-based AF detection algorithm using smartphones that has a low computational cost and low memory requirements. In particular, we propose a modified PPG signal acquisition, explore new statistical discriminating features and propose simple classification equations by using sequential forward selection (SFS) and support vector machines (SVM). The algorithm is applied to clinical data and evaluated in terms of receiver operating characteristic (ROC) curve and statistical measures. The combination of Shannon entropy and the median of the peak rise height achieves perfect detection of AF on the recorded data, highlighting the potential of PPG for reliable AF detection.
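One of the discriminating features named above, Shannon entropy, can be sketched on pulse-interval data: the irregular rhythm of AF yields high-entropy interval differences, while sinus rhythm yields low entropy. The binning choices and synthetic interval series are assumptions for illustration, not the paper's clinical feature pipeline.

```python
import numpy as np

def shannon_entropy(intervals, n_bins=8, rng_s=(-0.5, 0.5)):
    """Shannon entropy (bits) of the histogram of successive
    pulse-interval differences, over a fixed bin range in seconds."""
    hist, _ = np.histogram(np.diff(intervals), bins=n_bins, range=rng_s)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
regular = 0.8 + rng.normal(0, 0.005, 60)     # steady ~75 bpm intervals (s)
af_like = 0.8 + rng.uniform(-0.2, 0.2, 60)   # erratic, AF-like intervals

h_regular = shannon_entropy(regular)
h_af = shannon_entropy(af_like)
```

The AF-like series spreads its interval differences across many histogram bins, so its entropy is clearly higher; thresholding such a feature (here combined with peak-shape statistics in the paper) separates the two rhythms.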
Chen, Yudong; Zhou, Wang
The light projection method is often used in wire-diameter measuring systems, as it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases the cost and manufacturing difficulty. In this paper, following a comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied for data processing in the wire-diameter measuring system, to improve the measuring accuracy and enhance noise immunity. In the design of the system structure, a light projection method with an orthogonal structure is used for the optical detection part, which can effectively reduce the error caused by line jitter during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module, which can drive a dual-channel linear CCD as well as complete the sampling, processing and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without additional chips. The experimental results show that a sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limitation of the pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
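The sub-pixel polynomial-fitting idea above can be sketched in one dimension: a parabola is fitted to the intensity gradient around its peak, and the parabola's vertex locates the edge between pixels. The synthetic CCD profile and the three-point fit are assumptions for this illustration.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an intensity edge at sub-pixel resolution: find the peak of
    the absolute gradient, fit a parabola to the three samples around it,
    and return the vertex position (in pixel units)."""
    grad = np.abs(np.diff(profile.astype(float)))
    k = int(np.argmax(grad))
    k = min(max(k, 1), len(grad) - 2)          # keep a 3-point neighborhood
    x = np.array([k - 1, k, k + 1], dtype=float)
    a, b, _ = np.polyfit(x, grad[[k - 1, k, k + 1]], 2)  # g ~ a*x^2 + b*x + c
    return -b / (2 * a) + 0.5   # vertex; +0.5 centers the diff() samples

# smooth synthetic edge whose true position is exactly 10.0 pixels
x = np.arange(20, dtype=float)
profile = 1 / (1 + np.exp(-(x - 10.0) / 0.8))
edge = subpixel_edge(profile)
```

The recovered edge position lands within a small fraction of a pixel of the true location, which is how a modest-resolution CCD can still yield a precise wire-diameter estimate from the distance between its two edges.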
Full Text Available In this paper, a novel damage detection algorithm is developed based on blind source separation in conjunction with time-series analysis. Blind source separation (BSS) is a powerful signal processing tool that is used to identify the modal responses and mode shapes of a vibrating structure using only the knowledge of the responses. In the proposed method, BSS is first employed to estimate the modal response using the vibration measurements. Time-series analysis is then performed to characterize the mono-component modal responses, and successively the resulting time-series models are utilized for one-step-ahead prediction of the modal response. With the occurrence of newer measurements containing the signature of the damaged system, a variance-based damage index is used to identify the damage instant. Once the damage instant is identified, the damaged and undamaged modal parameters of the system are estimated in an adaptive fashion. The proposed method addresses classical damage detection issues, including the identification of the damage instant and location as well as the severity of damage. The proposed damage detection algorithm is verified using extensive numerical simulations followed by a full-scale study of the UCLA Factor building using the measured responses under the Parkfield earthquake.
The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of the clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and a change of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was compiled (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
Groth, M.; Henes, F.O.; Mayer, U.; Regier, M.; Adam, G.; Begemann, P.G.C.
Objective: To compare the incidence of pulmonary embolism (PE) and additional pathologic findings (APF) detected by computed tomography pulmonary angiography (CTPA) across different age groups. Materials and methods: 1353 consecutive CTPA cases for suspected PE were retrospectively reviewed. Patients were divided into seven age groups: ≤29, 30–39, 40–49, 50–59, 60–69, 70–79 and ≥80 years. Differences between the groups were tested using Fisher's exact or chi-square test; a p-value < 0.0024 (Bonferroni-corrected) was considered statistically significant. Conclusion: The incidences of PE and APF detected by CTPA reveal no significant differences between various age groups.
Full Text Available Trichodesmium, a major colonial cyanobacterial nitrogen fixer, forms large blooms in NO3-depleted tropical oceans and enhances CO2 sequestration by the ocean due to its ability to fix dissolved dinitrogen. Thus, its importance in C and N cycles requires better estimates of its distribution at basin to global scales. However, existing algorithms to detect them from satellite have not yet been successful in the South Western Tropical Pacific (SP). Here, a novel algorithm (TRICHOSAT, for TRICHOdesmium SATellite), based on radiance anomaly spectra (RAS) observed in SeaWiFS imagery, is used to detect Trichodesmium during the austral summertime in the SP (5° S–25° S, 160° E–170° W). Selected pixels are characterized by a restricted range of parameters quantifying the RAS spectra (e.g. slope, intercept, curvature). The fraction of valid (non-cloudy) pixels identified as Trichodesmium surface blooms in the region is low (between 0.01 and 0.2%), but is about 100 times higher than deduced from previous algorithms. At daily scales in the SP, this fraction represents a total ocean surface area varying from 16 to 48 km2 in Winter and from 200 to 1000 km2 in Summer (and at monthly scale, from 500 to 1000 km2 in Winter and from 3100 to 10,890 km2 in Summer, with a maximum of 26,432 km2 in January 1999). The daily distribution of Trichodesmium surface accumulations in the SP detected by TRICHOSAT is presented for the period 1998–2010, which demonstrates that the number of selected pixels peaks in November–February each year, consistent with field observations. This approach was validated with in situ observations of Trichodesmium surface accumulations in the Melanesian archipelago around New Caledonia, Vanuatu and the Fiji Islands for the same period.
Behrens, F; Mackeben, M; Schröder-Preikschat, W
This saccade-detection algorithm for the analysis of eye-movement time series is based on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and the thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye-movement data (obtained by EOG) recorded while driving a car. A second demonstration of the algorithm detects microsleep episodes in eye-movement data.
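The adaptive-threshold idea in advantage (1) can be sketched as follows: the onset threshold is derived from the recent acceleration history rather than fixed. The window length, the k-sigma rule and the synthetic eye-position trace are assumptions for this illustration, not the paper's full model.

```python
import numpy as np

def detect_onsets(position, fs, window=0.2, k=5.0):
    """Flag saccade onsets where |acceleration| exceeds an adaptive
    threshold computed from the preceding `window` seconds of data."""
    velocity = np.gradient(position) * fs
    accel = np.gradient(velocity) * fs
    n = int(window * fs)
    onsets = []
    i = n
    while i < len(accel):
        baseline = accel[i - n:i]
        thresh = k * baseline.std() + 1e-9   # threshold adapts to history
        if abs(accel[i]) > thresh:
            onsets.append(i)
            i += n                           # skip past the detected saccade
        else:
            i += 1
    return onsets

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
pos = np.where(t < 0.5, 0.0, 10.0)              # step-like saccade at 0.5 s
pos = pos + 0.01 * np.sin(2 * np.pi * 3 * t)    # slow pursuit-like drift
onsets = detect_onsets(pos, fs)
```

Because the threshold scales with the local acceleration noise, the slow drift never triggers a detection, while the saccade's acceleration spike exceeds the adaptive threshold by orders of magnitude.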
Masoom, Hassan; Adve, Raviraj S; Cobbold, Richard S C
A technique is proposed for the detection of abnormalities (targets) in ultrasound images using little or no a priori information and requiring little operator intervention. The scheme is a combination of the CLEAN algorithm, originally proposed for radio astronomy, and constant false alarm rate (CFAR) processing, as developed for use in radar systems. The CLEAN algorithm identifies areas in the ultrasound image that stand out above a threshold in relation to the background; CFAR techniques allow for an adaptive, semi-automated, selection of the threshold. Neither appears to have been previously used for target detection in ultrasound images and never together in any context. As a first step towards assessing the potential of this method we used a widely used method of simulating B-mode images (Field II). We assumed the use of a 256 element linear array operating at 3.0 MHz into a water-like medium containing a density of point scatterers sufficient to simulate a background of fully developed speckle. Spherical targets with diameters ranging from 0.25 to 6.0 mm and contrasts ranging from 0 to 12 dB relative to the background were used as test objects. Using a contrast-detail analysis, the probability of detection curves indicate these targets can be consistently detected within a speckle background. Our results indicate that the method has considerable promise for the semi-automated detection of abnormalities with diameters greater than a few millimeters, depending on the contrast.
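The CFAR half of the scheme can be illustrated with a one-dimensional cell-averaging CFAR, in which each cell is compared against a threshold scaled from the mean of the surrounding training cells (guard cells excluded). A minimal sketch with illustrative parameter values, not the paper's configuration:

```python
def ca_cfar(x, num_train=8, num_guard=2, alpha=4.0):
    """Cell-averaging CFAR: flag cells exceeding alpha times the mean of
    the surrounding training cells (guard cells around the cell under
    test are excluded from the noise estimate)."""
    n = len(x)
    detections = []
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on both sides, skipping the guard cells.
        train = x[i - half:i - num_guard] + x[i + num_guard + 1:i + half + 1]
        noise = sum(train) / len(train)
        if x[i] > alpha * noise:
            detections.append(i)
    return detections
```

Because the threshold scales with the locally estimated background, the false alarm rate stays roughly constant as the background level varies, which is exactly the property exploited for semi-automated threshold selection.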
Waheed, Khuram; Salem, Fathi M
Code division multiple access (CDMA) is based on the spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a lot of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling the highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding the linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).
Paulik, Róbert; Micsik, Tamás; Kiszler, Gábor; Kaszál, Péter; Székely, János; Paulik, Norbert; Várhalmi, Eszter; Prémusz, Viktória; Krenács, Tibor; Molnár, Béla
Nuclear estrogen receptor (ER), progesterone receptor (PR) and Ki-67 protein positive tumor cell fractions are semiquantitatively assessed in breast cancer for prognostic and predictive purposes. These biomarkers are usually revealed using immunoperoxidase methods, resulting in diverse signal intensity and frequent inhomogeneity in tumor cell nuclei, which are routinely scored and interpreted by a pathologist during conventional light-microscopic examination. In the last decade, digital pathology-based whole slide scanning and image analysis algorithms have shown tremendous development in supporting pathologists in this diagnostic process, which can directly influence patient selection for targeted therapy and chemotherapy. We have developed an image analysis algorithm optimized for whole slide quantification of nuclear immunostaining signals of ER, PR, and Ki-67 proteins in breast cancers. In this study, we tested the consistency and reliability of this system in a series of both brightfield and DAPI-stained fluorescent samples. Our method allows the separation of overlapping cells and signals, reliable detection of vesicular nuclei, and background compensation, especially in FISH-stained slides. Detection accuracy and processing speed were validated in routinely immunostained breast cancer samples of varying reaction intensities and image qualities. Our technique supported automated nuclear signal detection with excellent efficacy: Precision Rate/Positive Predictive Value was 90.23 ± 4.29%, while Recall Rate/Sensitivity was 88.23 ± 4.84%. These factors and the average counting speed of our algorithm were compared with two other open-source applications (QuPath and CellProfiler), showing a 6-7% higher Recall Rate and a 4- to 30-fold higher processing speed. In conclusion, our image analysis algorithm can reliably detect and count nuclear signals in digital whole slides or any selected large areas, i.e. hot spots, and thus can support pathologists in assessing
Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.
This work describes the design of an algorithm for detecting apnea episodes based on analysis of the thorax respiratory effort signal. Inspiration and expiration times, and the amplitude range of the respiratory cycle, were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Performance was validated on 8 records of the Apnea-ECG database, which has annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
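The windowed standard-deviation idea can be sketched in a few lines: windows whose respiratory-effort variability falls well below the overall signal variability are flagged as candidate apnea episodes. The function name, window length, and threshold fraction below are assumptions, not values from the paper.

```python
def detect_apnea(effort, fs=10, win_s=10, frac=0.3):
    """Flag non-overlapping windows whose amplitude variability (standard
    deviation) drops below `frac` of the overall signal variability,
    i.e. windows where respiratory effort has nearly ceased."""
    w = int(win_s * fs)
    mean = sum(effort) / len(effort)
    global_sd = (sum((v - mean) ** 2 for v in effort) / len(effort)) ** 0.5
    flags = []
    for start in range(0, len(effort) - w + 1, w):
        seg = effort[start:start + w]
        m = sum(seg) / w
        sd = (sum((v - m) ** 2 for v in seg) / w) ** 0.5
        flags.append(sd < frac * global_sd)
    return flags
```

A full detector would also use inspiration/expiration timing, as the abstract describes; this sketch isolates only the amplitude-range criterion.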
Vela, Adan Ernesto
From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. The goal
Brigden, M; Graydon, C
To audit a cohort of ambulatory outpatients with eosinophilia detected on automated blood cell counting. Specific objectives included the determination of whether the eosinophilia had been anticipated, the etiology of the eosinophilia, the clinical follow-up and investigations performed on patients with eosinophilia, and the effect of the detection of eosinophilia on patient management and ultimate clinical outcome. A year-long retrospective review of all patients with an absolute eosinophil count of greater than 0.7 × 10^9/L. A large outpatient laboratory system. The patient population was managed by family physicians and specialists. Data collection included the results of the hematology profile, the absolute eosinophil count, the clinical situation responsible for the hematologic profile determination, and the probable cause of eosinophilia. Individual physicians were surveyed to determine if discovery of the eosinophilia had changed the patient management plan or clinical outcome. Out of 195,300 patients who had a hematology profile performed, 225 were found to have an absolute eosinophil count higher than 0.7 × 10^9/L. The overall incidence of eosinophilia in the study population was 0.1%. The eosinophilia was not anticipated in 85% of patients. No obvious cause was detected for the eosinophilia in 36% of patients. Various allergic diseases were responsible for the eosinophilia in the majority of the remaining patients. Fewer than 9% of individuals manifested a serious systemic illness or parasitemia. Further clinical follow-up had been performed in 69% of patients. Additional laboratory tests had been ordered in 59% of patients. The laboratory tests most frequently ordered were a repeat hematology profile or stool examinations for ova and parasites. In only two instances did the discovery of the eosinophilia appear to result in a significant change in patient management or ultimate clinical outcome. The vast majority of eosinophilias detected in ambulatory
Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa
The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, the major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after optimization by the Pillar algorithm, which positions initial centroids far apart, the way pillars are placed so that they can withstand load. The improved Pillar algorithm can optimize K-means clustering for image segmentation in terms of precision and computation time. The proposed approach is evaluated by comparison with K-means and Fuzzy C-means on a medical image. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
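The far-apart placement of initial centroids can be sketched with a farthest-first traversal, which captures the spirit of the Pillar-style initialization (this is an illustrative stand-in, not the published Pillar algorithm, and all names are assumptions):

```python
def pillar_init(points, k):
    """Choose k initial centroids spread as far apart as possible
    (farthest-first traversal), in the spirit of Pillar-style K-means
    initialization."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # First pillar: the point farthest from the grand mean.
    dim = len(points[0])
    mean = [sum(p[d] for p in points) / len(points) for d in range(dim)]
    centroids = [max(points, key=lambda p: dist2(p, mean))]
    while len(centroids) < k:
        # Next pillar: the point with the largest distance to its
        # nearest already-chosen centroid.
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    return centroids
```

The resulting centroids would then seed a standard K-means loop; spreading them out this way tends to reduce both the iteration count and the chance of converging to a poor local optimum.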
Azofeifa, Joseph G; Allen, Mary A; Lladser, Manuel E; Dowell, Robin D
We present a fast and simple algorithm to detect nascent RNA transcription in global nuclear run-on sequencing (GRO-seq). GRO-seq is a relatively new protocol that captures nascent transcripts from actively engaged polymerase, providing a direct read-out on bona fide transcription. Most traditional assays, such as RNA-seq, measure steady state RNA levels which are affected by transcription, post-transcriptional processing, and RNA stability. GRO-seq data, however, presents unique analysis challenges that are only beginning to be addressed. Here, we describe a new algorithm, Fast Read Stitcher (FStitch), that takes advantage of two popular machine-learning techniques, hidden Markov models and logistic regression, to classify which regions of the genome are transcribed. Given a small user-defined training set, our algorithm is accurate, robust to varying read depth, annotation agnostic, and fast. Analysis of GRO-seq data without a priori need for annotation uncovers surprising new insights into several aspects of the transcription process.
ALEX RAJ S. M.
Underwater images have raised new challenges for digital image processing in recent years because of their widespread applications. Many tangled issues must be considered when processing images collected in a water medium, due to the adverse effects imposed by the environment itself. Image segmentation is the basal stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is the most promising approach, within which Canny edge detection based segmentation is preferred for its high noise immunity and its ability to handle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation must be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd
Closed-Circuit TV (CCTV) systems are one of the technologies in the surveillance field that address detection and monitoring by providing extra features such as email alerts or motion detection. However, detection and alerting of the admin in a CCTV system may be complicated by the need to integrate the main program with an external Application Programming Interface (API). In this study, a pixel processing algorithm is applied for its efficiency, and SMS alerting is added as an alternative for users who opt out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented in Microsoft Visual Studio, while the backend components, the database and the coding, were implemented with an SQLite database and the C# programming language, respectively. The main modules of CMDSA are motion detection, video capture and storage, image processing, and Short Message Service (SMS) alert functions. The system is able to reduce processing time, making the detection process faster, to reduce the space and memory used to run the program, and to alert the system admin instantly.
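A pixel-processing motion check of the kind described can be sketched by frame differencing: count the pixels whose intensity changed by more than a threshold between consecutive frames. Function and parameter names are illustrative, not taken from CMDSA (which is written in C#):

```python
def motion_detected(prev_frame, curr_frame, pixel_thresh=25, count_thresh=50):
    """Simple pixel-processing motion check: count pixels whose grayscale
    intensity changed by more than pixel_thresh between two frames;
    motion is flagged when the count exceeds count_thresh."""
    changed = 0
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for a, b in zip(row_prev, row_curr):
            if abs(a - b) > pixel_thresh:
                changed += 1
    return changed > count_thresh
```

In a system like CMDSA, a positive result from a check like this would trigger the video-capture and SMS-alert modules.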
Lin, Ling; Li, Na; Li, Gang
Dynamic spectrum (DS) detection is attractive among the numerous noninvasive blood component detection methods because it eliminates the main interference of individual discrepancy and measurement conditions. DS is a kind of spectrum extracted from the photoelectric pulse wave and is closely related to arterial blood; it can be used in noninvasive examination of blood component concentration. The key issues in DS detection are high detection precision and high operation speed. Measurement precision can be improved by using over-sampling and lock-in amplifying in the pick-up of the photoelectric pulse wave. In the present paper, the theoretical expression for the over-sampling and lock-in amplifying method is first derived. Then, to overcome the large data volume and heavy computation this technique brings, a quick algorithm based on LabVIEW and a method of calling external C code in the pick-up of the photoelectric pulse wave are presented. Experimental verification was conducted in the LabVIEW environment. The results show that, with the presented method, the operation speed is greatly increased and the memory footprint is largely reduced.
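The lock-in amplifying step can be sketched as quadrature demodulation: multiply the oversampled signal by sine and cosine references at the modulation frequency and low-pass by averaging. A minimal sketch, not the authors' LabVIEW/C implementation; names and parameters are assumptions:

```python
import math

def lock_in_amplitude(samples, fs, f_ref):
    """Recover the amplitude of a component at f_ref buried in other
    components by multiplying with quadrature references and averaging
    (the averaging acts as the low-pass filter of a lock-in amplifier)."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, v in enumerate(samples):
        phase = 2.0 * math.pi * f_ref * k / fs
        i_sum += v * math.cos(phase)
        q_sum += v * math.sin(phase)
    # Factor 2/n converts the averaged quadrature products back to amplitude.
    return 2.0 * math.hypot(i_sum, q_sum) / n
```

Because interfering components at other frequencies average toward zero, the reference-frequency amplitude is recovered even when the interference is comparable in size, which is the precision gain the abstract refers to.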
Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song
This paper introduces in brief the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described. A speed and position detection system is recommended. It is installed on board and is used as the feedback end. Restricted by the maglev train's structure, the permanent magnet electrodynamic suspension (EDS) train uses a non-contact method to detect its position. Because of the shake and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete tracking-differentiator filtering algorithm is proposed. The filtering characteristics of the tracking-differentiator (TD) and of a TD group are analyzed. Four TDs in series are used in the signal processing unit. The results show that the tracking-differentiator has a good effect and lets the traction system run normally.
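A linear discrete tracking-differentiator (TD) of the kind described can be sketched as a second-order filter whose first state tracks the noisy input and whose second state estimates its derivative. The parameter names and values below are illustrative, not taken from the paper:

```python
def tracking_differentiator(v, h=0.001, r=100.0):
    """Linear discrete tracking-differentiator: state x1 tracks the
    (noisy) input sequence v, state x2 estimates its derivative; the
    speed factor r trades tracking lag against noise suppression."""
    x1, x2 = v[0], 0.0
    x1_out, x2_out = [], []
    for vk in v:
        x1_new = x1 + h * x2
        # Critically damped second-order dynamics pulling x1 toward vk.
        x2_new = x2 + h * (-r * r * (x1 - vk) - 2.0 * r * x2)
        x1, x2 = x1_new, x2_new
        x1_out.append(x1)
        x2_out.append(x2)
    return x1_out, x2_out
```

For a ramp input of slope c, the steady state of these dynamics gives x2 = c exactly and a constant tracking lag of 2c/r, so increasing r shortens the lag at the cost of passing more sensor noise; chaining several TDs in series, as in the paper, smooths the signal further.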
Qin, Xianxiang; Zhou, Shilin; Zou, Huanxin; Ren, Yun
In this paper, a novel CFAR algorithm for detecting layover and shadow areas in interferometric synthetic aperture radar (InSAR) images is proposed. First, the probability density function (PDF) of the square-root amplitude of the InSAR image is estimated by kernel density estimation. Then, a CFAR algorithm combined with morphological processing is presented for detecting both layover and shadow. Finally, the proposed algorithm is evaluated on a real InSAR image obtained by the TerraSAR-X system. The experimental results validate the effectiveness of the proposed method.
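The combination of kernel density estimation with a CFAR threshold can be sketched as follows: estimate the amplitude PDF with Gaussian kernels, then solve for the threshold whose estimated exceedance probability equals the desired false alarm rate. This is a generic one-sided sketch with illustrative names, not the paper's two-class layover/shadow procedure:

```python
import math

def kde_cfar_threshold(samples, pfa=0.01, bandwidth=None):
    """Detection threshold from a Gaussian kernel density estimate:
    choose t so that the estimated probability of exceeding t is pfa."""
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        sd = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
        bandwidth = 1.06 * sd * n ** -0.2  # Silverman's rule of thumb

    def cdf(t):
        # KDE CDF = average of the Gaussian kernel CDFs centred on each sample.
        return sum(0.5 * (1.0 + math.erf((t - x) / (bandwidth * math.sqrt(2.0))))
                   for x in samples) / n

    # Bisection for the (1 - pfa) quantile of the estimated distribution.
    lo, hi = min(samples) - 5 * bandwidth, max(samples) + 5 * bandwidth
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < 1.0 - pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because shadow corresponds to anomalously low amplitude, the paper's detector would use the lower tail symmetrically; morphological post-processing then cleans up isolated false alarms.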
Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald
In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, and MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much better temporal and spatial sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and additionally three channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made compared to similar data sets derived from other
Alexander V Maltsev
Local Ca2+ releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig, and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changed sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and
Rear-end collisions are one of the most common types of accidents on roads. Global Navigation Satellite Systems (GNSS) have recently become sufficiently flexible and cost-effective to have great potential for use in rear-end collision avoidance systems (CAS). Nevertheless, there are two main issues associated with current vehicle rear-end CAS: (1) achieving relative vehicle positioning and dynamic parameters with sufficiently high accuracy, and (2) a reliable method to extract the car-following status from such information. This paper introduces a novel integrated algorithm for rear-end collision detection. Access to high-accuracy positioning is enabled by fusing GNSS, electronic compass, and lane information with a Cubature Kalman Filter (CKF). The judgment of the car-following status is based on the application of the Adaptive Neurofuzzy Inference System (ANFIS). The field test results show that the designed algorithm could effectively detect rear-end collisions with an accuracy of 99.61% and a false alarm rate of 5.26% at a 10 Hz output rate.
Dynamic properties such as natural frequencies and mode shapes are directly affected by damage in structures. In this paper, changes in natural frequencies and mode shapes were used as the input to various objective functions for damage detection. Objective functions related to natural frequencies, mode shapes, modal flexibility and modal strain energy have been used, and their performance has been analyzed under varying noise conditions. Three beams were analyzed: two were simulated beams with single and multiple damage scenarios, and one was an experimental beam. To do this, SAP 2000 (v14, Computers and Structures Inc., Berkeley, CA, United States, 2009) is linked with MATLAB (r2015, The MathWorks, Inc., Natick, MA, United States, 2015). The genetic algorithm (GA), an evolutionary algorithm (EA), was used to update the damaged structure for damage detection. Due to the degradation of the performance of objective functions under noisy conditions, a modified objective function based on the concept of regularization has been proposed, which can be effectively used in combination with an EA. All three beams were used to validate the proposed procedure. It has been found that the modified objective function gives better results even in noisy and actual experimental conditions.
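A regularized objective of the kind described can be sketched as a frequency-error term plus a (1 - MAC) mode-shape term, with a Tikhonov-style penalty on the damage vector to suppress noise-driven solutions. This is an illustrative form, not the paper's exact function; all names are assumptions:

```python
def damage_objective(freqs_model, freqs_meas, modes_model, modes_meas,
                     damage, beta=0.1):
    """Objective for EA-based model updating: relative frequency error
    plus (1 - MAC) mode-shape error, with a Tikhonov-style penalty
    beta * ||damage||^2 discouraging spurious damage under noise."""
    f_err = sum(((fm - fe) / fe) ** 2
                for fm, fe in zip(freqs_model, freqs_meas))

    def mac(a, b):
        # Modal Assurance Criterion between two mode-shape vectors.
        num = sum(x * y for x, y in zip(a, b)) ** 2
        den = sum(x * x for x in a) * sum(y * y for y in b)
        return num / den

    m_err = sum(1.0 - mac(a, b) for a, b in zip(modes_model, modes_meas))
    reg = beta * sum(d * d for d in damage)
    return f_err + m_err + reg
```

An evolutionary algorithm such as a GA would minimize this value over candidate damage vectors; without the regularization term, measurement noise tends to be absorbed into many small spurious damage values.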
Khaji, N. [Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mehrjoo, M. [Islamic Azad University, Tehran (Iran, Islamic Republic of)
In this paper, a crack detection approach is presented for detecting the depth and location of cracks in beam-like structures. For this purpose, a new beam element with an arbitrary number of embedded transverse edge cracks, at arbitrary positions along the beam element and with any depth, is derived. The components of the stiffness matrix for the cracked element are computed using the conjugate beam concept and Betti's theorem, and are finally represented in closed-form expressions. The proposed beam element is efficiently employed for solving the forward problem (i.e., obtaining precise natural frequencies and mode shapes of the beam when the cracks' characteristics are known). To validate the proposed element, results obtained with the new element are compared with two-dimensional (2D) finite element results and available experimental measurements. Moreover, knowing the natural frequencies and mode shapes, an inverse problem is established in which the location and depth of cracks are determined. In the inverse approach, an optimization problem based on the new finite element and genetic algorithms (GAs) is solved to search for the solution. It is shown that the present algorithm is able to identify various crack configurations in a cracked beam. The proposed approach is verified through a cracked beam containing various cracks with different depths.
We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours, with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method, based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.
Ramaswamy, Karthik; Agarwal, Sanjeev; Rao, Vittal S.
In this paper, an architecture for multisensor data fusion and evidence accumulation for landmine detection and discrimination is presented. Evidential and discriminatory information about the buried object, such as shape, size, depth, and material, chemical or electromagnetic properties, is obtained from different sensors and sensor algorithms. A streamlined assimilation of this varied information from dissimilar and non-homogeneous sensors and sensor algorithms is presented. Information-theory-based pre-processing of the data and subsequent unsupervised clustering using the Dignet architecture are used to capture the underlying structure of the information available from different sensors. Sensor information is categorized into type, size, depth, and position data channels. Each sensor may provide one or more of these. The type data channel provides any relevant discriminatory characteristics of the buried object. A supervised feed-forward neural network is used to learn the causality between the cluster information and the evidence of a given class of buried object. Size, depth and phenomenology inputs are used as control gating inputs for the neural network mapping. The supervisory feedback is provided by the output of the global sensor fusion system and accommodates both autonomous and human-assisted learning. Dempster-Shafer evidential reasoning is used to accumulate the different pieces of evidence from the sensor channels and thus to detect and discriminate between different types of buried landmine and clutter. Performance of the fusion architecture and Dempster-Shafer reasoning is studied using simulated data. For the simulated data, noisy images of regular and irregular shapes of different objects are produced. Fourier descriptor, moment invariant and Matlab shape features are used to define the shape information of the objects. Evidence accumulation is done using shape and size information from each of the algorithms.
Thiele, S.; Grose, L.; Micklethwaite, S.
UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
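The least-cost path search over a local neighbourhood network can be sketched with Dijkstra's algorithm on a k-nearest-neighbour graph, where edge costs combine distance with a per-point cost (e.g. brightness, so dark fracture pixels are cheap to follow). Names and the cost weighting below are illustrative, not taken from the prototype:

```python
import heapq
import math

def least_cost_trace(points, costs, start, end, k=6):
    """Dijkstra least-cost path between two point indices on a
    k-nearest-neighbour graph; edge weight = Euclidean distance scaled
    by the mean per-point cost of its two endpoints."""
    n = len(points)

    def d(i, j):
        return math.dist(points[i], points[j])

    # Brute-force k-NN adjacency; fine for small clouds, a spatial index
    # would be needed for large ones.
    nbrs = [sorted(range(n), key=lambda j: d(i, j))[1:k + 1] for i in range(n)]
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        du, u = heapq.heappop(heap)
        if u == end:
            break
        if du > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in nbrs[u]:
            w = d(u, v) * (1.0 + 0.5 * (costs[u] + costs[v]))
            alt = du + w
            if alt < dist.get(v, math.inf):
                dist[v] = alt
                prev[v] = u
                heapq.heappush(heap, (alt, v))
    # Walk the predecessor chain back from end to start.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

User-supplied waypoints, as described above, would simply chain several such searches end to end, each constrained to pass through the intermediate point.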
Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic
By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs best out of all the considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner by replacing the breadth-first search procedure with a parallel method. The algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on the GPU using the CUDA platform improves the speed of execution by 2-100× compared with the central processing unit-based implementation using the OpenCV and MATLAB platforms.
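As a reference point for the edge detectors named above, the Sobel operator can be written in a few lines of plain serial NumPy. This sketch shows only the operator itself, not the parallel CUDA implementation that is the paper's contribution:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map using the 3x3 Sobel operator (serial NumPy)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):                  # borders left at zero
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)                    # gradient magnitude

# A synthetic image with a vertical step edge at column 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

The step edge produces a strong response in the two columns straddling the discontinuity and zero response in flat regions; a GPU version parallelizes the per-pixel loop, which is embarrassingly parallel.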
The occurrence of series of events is always associated with news reports, social networks, and Internet media. In this paper, a detection system for public security events is designed, which carries out a clustering operation on relevant text data so that the relevant departments can benefit from its evaluation and handling. Firstly, texts are mapped into a three-dimensional space using the vector space model. Then, to overcome the shortcomings of the traditional clustering algorithm, an improved fuzzy c-means (FCM) algorithm based on an adaptive genetic algorithm and semisupervised learning is proposed. In the proposed algorithm, the adaptive genetic algorithm is employed to select optimal initial clustering centers. Meanwhile, motivated by semisupervised learning, the guiding effect of prior knowledge is used to accelerate the iterative process. Finally, simulation experiments are conducted from the two aspects of qualitative and quantitative analysis, and they demonstrate that the proposed algorithm performs excellently in improving the clustering centers, the clustering results, and the consumed time.
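The baseline the paper improves on, standard fuzzy c-means, is compact enough to sketch. The paper's variant additionally seeds the initial centers with an adaptive genetic algorithm and injects semisupervised guidance, neither of which is shown here:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means with fuzzifier m; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated blobs of "document" vectors in 3-D feature space
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 3)),
               np.random.default_rng(2).normal(5, 0.1, (20, 3))])
centers, U = fcm(X, c=2)
```

Because the plain algorithm starts from random memberships, its result depends on initialization, which is exactly the weakness the genetic-algorithm seeding is meant to address.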
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression: unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression-based score estimation (RBRSE) algorithm for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended to the RBR procedure. A paired-dataset-based k-NN score estimation method is then applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some other state-of-the-art anomaly detection methods, and is easy to implement.
Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T
Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to the diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best-performing prediction method. We conclude that the available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons, whereas when applied to local syndromic data they did not display consistent performance. Further evaluation and research on combining these types of methods in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
Rasmussen, J N; Voldstedlund, M; Andersen, R L
In Denmark recurrent epidemics of Mycoplasma pneumoniae infections have been described since the 1950s at intervals of approximately four to six years. The latest epidemic occurred in 2004/05 followed by two years of high incidence and more than three years of low incidence. Due to a recent incre...
Kim, So-Hyeong; Han, Ji-Hae; Suh, Myoung-Seok
In this study, we developed a hybrid fog detection algorithm (FDA) for nighttime using AHI/Himawari-8 satellite and ground observation data. To detect fog at nighttime, the Dual Channel Difference (DCD) method, based on the emissivity difference between SWIR and IR1, is most widely used. The DCD is good at discriminating fog from other scenes (middle/high clouds, clear sea, and clear land); however, it has difficulty distinguishing fog from low clouds. To separate low clouds from the pixels that satisfy the fog thresholds in the DCD test, we conducted supplementary tests, such as the normalized local standard deviation (NLSD) of BT11 and the difference between the fog top temperature (BT11) and the air temperature (Ta) from NWP data (SST from OSTIA data). These tests are based on the greater homogeneity of fog tops compared with low cloud tops and on the similarity of the fog top temperature to Ta (SST). Threshold values for the three tests were optimized through ROC analysis for the selected fog cases. In addition, considering the spatial continuity of fog, post-processing was performed to detect missed pixels, in particular at the edges of fog or for sub-pixel-size fog. The final fog detection results are presented as a fog probability (0-100%). Validation was conducted by comparing the fog detection probability with ground-observed visibility data from KMA. The validation results showed that the POD and FAR ranged from 0.70 to 0.94 and from 0.45 to 0.72, respectively. The quantitative validation and visual inspection indicate that the current FDA tends to over-detect fog, so more work on reducing the FAR is needed. In the future, we will also validate sea fog using CALIPSO data.
Steiner, N.; Tedesco, M.
Melting is mapped over Antarctica at a high spatial resolution using a novel melt-detection algorithm based on wavelets and multi-scale analysis. The method is applied to Ku-band (13.4 GHz) normalized backscattering measured by SeaWinds on QuikSCAT and spatially enhanced on a 5 km grid over the operational life of the sensor (1999-2009). Wavelet-based estimates of melt spatial extent and duration are compared with those obtained by means of threshold-based detection methods, where melting is detected when the measured backscattering falls 3 dB below the preceding winter mean value. Results from both methods are assessed by means of Automatic Weather Station (AWS) air surface temperature records. The yearly melting index, the product of melted area and melting duration, found using the fixed-threshold and wavelet-based melt algorithms has a relative difference within 7% for all years. The majority of the difference between the melting records determined from QuikSCAT is related to short-duration backscatter changes identified as melting by the threshold methodology but not by the wavelet-based method. Compared with AWS records, both methods show a relative accuracy to within 10% based on melt conditions estimated from air temperatures. Melting maps obtained with the wavelet-based approach are also compared with those obtained from spaceborne brightness temperatures recorded by the Special Sensor Microwave/Imager (SSM/I). With respect to the passive microwave records, we find a higher degree of agreement for the melting index using the wavelet-based approach (9% relative difference) than with threshold-based methods (11% relative difference). Additionally, linkages between melting variability and the Southern Annular Mode (SAM), an important large-scale climate driver for Antarctica, are suggested by the results of the wavelet-based methods but are not found using threshold-based methods.
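The threshold baseline used for comparison is simple to state: a pixel is flagged as melting when its backscatter falls at least 3 dB below the preceding winter mean. A single-pixel sketch with synthetic values:

```python
import numpy as np

def melt_flags(sigma0, winter_mean, drop_db=3.0):
    """Threshold detector: melting where backscatter is >= drop_db below
    the preceding winter mean (the baseline the wavelet method is compared to)."""
    return sigma0 <= winter_mean - drop_db

# Synthetic Ku-band backscatter time series (dB) for one pixel:
# winter plateau near -8 dB, with a summer melt dip
sigma0 = np.full(100, -8.0)
sigma0[40:60] = -13.0                    # wet snow absorbs: backscatter drops
winter_mean = sigma0[:30].mean()         # baseline from the preceding winter
melting = melt_flags(sigma0, winter_mean)
melt_duration = melting.sum()            # melting duration in days (one pixel)
```

Summing flagged days over the melted area gives the yearly melting index discussed above; the wavelet approach differs mainly in how it rejects short-duration backscatter dips that this simple threshold would count as melt.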
Chen, Yan; Guo, Jeff J; Healy, Daniel P; Lin, Xiaodong; Patel, Nick C
With the exception of case reports, limited data are available regarding the risk of hepatotoxicity associated with the use of telithromycin. The objective of this study was to detect the safety signal regarding the reporting of hepatotoxicity associated with the use of telithromycin using 4 commonly employed data mining algorithms (DMAs). Based on the Adverse Events Reporting System (AERS) database of the Food and Drug Administration, 4 DMAs, namely the reporting odds ratio (ROR), the proportional reporting ratio (PRR), the information component (IC), and the Gamma Poisson Shrinker (GPS), were applied to examine the association between the reporting of hepatotoxicity and the use of telithromycin. The study period was from the first quarter of 2004 to the second quarter of 2006. Reports of hepatotoxicity were identified using the preferred terms indexed in the Medical Dictionary for Regulatory Activities, and the drug name was used to identify reports regarding the use of telithromycin. A total of 226 reports describing hepatotoxicity associated with the use of telithromycin were recorded in the AERS. A safety problem of telithromycin associated with increased reporting of hepatotoxicity was clearly detected by all 4 algorithms as early as 2005: the problem was signaled in the first quarter by the ROR and the IC, in the second quarter by the PRR, and in the fourth quarter by the GPS. A safety signal indicated by the 4 DMAs suggests an association between the reporting of hepatotoxicity and the use of telithromycin. Given the wide use of telithromycin and the serious consequences of hepatotoxicity, clinicians should be cautious when selecting telithromycin for treatment of an infection. In addition, further observational studies are required to evaluate the utility of signal detection systems for early recognition of serious, life-threatening, low-frequency drug-induced adverse events.
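Two of the four DMAs, the ROR and the PRR, are simple disproportionality ratios over a 2×2 table of report counts. The counts below are hypothetical, not the actual AERS tabulation:

```python
def ror_prr(a, b, c, d):
    """Disproportionality measures from a 2x2 report table:
        a: target drug & target event    b: target drug, other events
        c: other drugs, target event     d: other drugs, other events
    """
    ror = (a / b) / (c / d)                 # reporting odds ratio
    prr = (a / (a + b)) / (c / (c + d))     # proportional reporting ratio
    return ror, prr

# Hypothetical counts (not the paper's telithromycin numbers)
ror, prr = ror_prr(a=226, b=1000, c=5000, d=500000)
```

Values well above 1 mean the event is reported disproportionately often for the drug of interest; in practice a signal also requires the lower confidence bound to clear a threshold, and the IC and GPS add Bayesian shrinkage to stabilize small counts.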
Hawramy Tahir Abdullah Hussein
Background Fascioliasis is an often-neglected zoonotic disease and is currently an emerging infection in Iraq. Fascioliasis has two distinct phases: an acute phase, corresponding to the hepatic migratory stage of the fluke's life cycle, and a chronic biliary phase, manifested by the presence of the parasite in the bile ducts after passage through the hepatic tissue. The incidence of fascioliasis in Sulaimaniyah governorate was an unexpected observation. We believe that shedding light on this disease in our locality will increase our physicians' awareness and experience in early detection and treatment, in order to avoid unnecessary surgeries. Findings We retrospectively evaluated this disease in terms of demographic features, clinical presentations, and management by reviewing the medical records of 18 patients who were admitted to the Sulaimani Teaching Hospital and the Kurdistan Centre for Gastroenterology and Hepatology. Patients complained of hepatobiliary and/or upper gastrointestinal symptoms and were diagnosed incidentally with fascioliasis during hepatobiliary surgery and ERCP, by direct visualization of the flukes and by stone analysis. Elevated liver enzymes, white blood cell count, and eosinophilia were notable laboratory indices. A dilated CBD, gallstones, liver cysts, and abscesses were commonly found on radiological images. Fascioliasis was diagnosed during conventional surgical CBD exploration and choledochoduodenostomy, open cholecystectomy, surgical drainage of liver abscess, ERCP, and gallstone analysis. Conclusion Fascioliasis is indeed an emerging disease in our locality, but it is often underestimated and ignored. We recommend including it in the differential diagnosis for patients suffering from right hypochondrial pain, fever, and eosinophilia. Watercress ingestion was a common factor in the patients' histories.
Aubin, S; Beaulieu, L; Pouliot, S; Pouliot, J; Roy, R; Girouard, L M; Martel-Brisson, N; Vigneault, E; Laverdière, J
An algorithm for the daily localization of the prostate using implanted markers and a standard video-based electronic portal imaging device (V-EPID) has been tested. Prior to planning, three gold markers were implanted in the prostate of each of seven patients. The clinical images were acquired with a BeamViewPlus 2.1 V-EPID for each field during the normal course of radiotherapy treatment and were used off-line to determine the ability of the automatic marker detection algorithm to adequately and consistently detect the markers. Clinical images were obtained at various dose levels ranging from 2.5 to 75 MU. The algorithm is based on the characterization of marker attenuation in the portal image and on the markers' spatial distribution. A total of 1182 clinical images were taken. The results show an average efficiency of 93% for markers detected individually and 85% for the group of markers. The algorithm accomplishes detection and validation in 0.20-0.40 s. When the center of mass of the group of implanted markers is used, all displacements can be corrected to within 1.0 mm in 84% of cases and within 1.5 mm in 97% of cases. The standard video-based EPID tested provides excellent marker detection capability even at low dose levels. The V-EPID can be used successfully with radiopaque markers and the automatic detection algorithm to track and correct daily setup deviations due to organ motion.
Ma, Xiaoke; Wang, Bingbo; Yu, Liang
Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality and algorithms to discover communities. Despite significant research on each, few attempts have been made to establish a connection between the two issues. To attack this problem, a generalized quantification function is proposed for communities in weighted networks, which provides a framework that unifies several well-known measures. We then prove that the trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means, and spectral clustering, which serves as a theoretical foundation for designing community detection algorithms. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting this equivalence relation, combining nonnegative matrix factorization and spectral clustering. Unlike traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms for community detection.
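The unsupervised building block referred to above, spectral clustering on the normalized Laplacian, can be sketched for a two-way split. The paper's contribution is to fold partial supervision into this objective, which is omitted here:

```python
import numpy as np

def spectral_bipartition(W):
    """Two-way community split from the Fiedler vector of the
    symmetric normalized Laplacian of a weighted adjacency matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    fiedler = vecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)   # sign pattern gives the two communities

# Weighted graph: two 3-node cliques joined by one weak edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
labels = spectral_bipartition(W)
```

The sign of the Fiedler vector cleanly separates the two cliques because cutting the single weak edge minimizes the relaxed cut objective.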
Moon, Hee-Won; Kim, Hyeong Nyeon; Hur, Mina; Shim, Hee Sook; Kim, Heejung; Yun, Yeo-Min
Since every single test has limitations for detecting toxigenic Clostridium difficile, multistep algorithms are recommended. This study aimed to compare the current, representative diagnostic algorithms for detecting toxigenic C. difficile, using VIDAS C. difficile toxin A&B (toxin ELFA), VIDAS C. difficile GDH (GDH ELFA, bioMérieux, Marcy-l'Etoile, France), and Xpert C. difficile (Cepheid, Sunnyvale, California, USA). In 271 consecutive stool samples, toxigenic culture, toxin ELFA, GDH ELFA, and Xpert C. difficile were performed. We simulated two algorithms: screening by GDH ELFA and confirmation by Xpert C. difficile (GDH + Xpert), and a combined algorithm of GDH ELFA, toxin ELFA, and Xpert C. difficile (GDH + Toxin + Xpert). The performance of each assay and algorithm was assessed. The agreement of Xpert C. difficile and the two algorithms (GDH + Xpert and GDH + Toxin + Xpert) with toxigenic culture was strong (kappa, 0.848, 0.857, and 0.868, respectively). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the algorithms (GDH + Xpert and GDH + Toxin + Xpert) were 96.7%, 95.8%, 85.0%, 98.1%, and 94.5%, 95.8%, 82.3%, 98.5%, respectively. There were no significant differences between Xpert C. difficile and the two algorithms in sensitivity, specificity, PPV, and NPV. The performance of both algorithms for detecting toxigenic C. difficile was comparable to that of Xpert C. difficile. Either algorithm would be useful in clinical laboratories and can be optimized in the diagnostic workflow for C. difficile depending on costs, test volume, and clinical needs.
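The reported sensitivity, specificity, PPV, and NPV follow directly from a 2×2 table against the toxigenic-culture reference. The counts below are illustrative only; the paper reports percentages, not the underlying table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from confusion counts,
    with the reference standard (here, toxigenic culture) as ground truth."""
    sens = tp / (tp + fn)   # fraction of true positives detected
    spec = tn / (tn + fp)   # fraction of true negatives cleared
    ppv = tp / (tp + fp)    # probability a positive result is correct
    npv = tn / (tn + fn)    # probability a negative result is correct
    return sens, spec, ppv, npv

# Hypothetical counts for a multistep algorithm vs. toxigenic culture
sens, spec, ppv, npv = diagnostic_metrics(tp=59, fp=10, fn=2, tn=200)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is why two algorithms with the same sensitivity can report different PPVs on different sample populations.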
W. Wang; J.J. Qu; X. Hao; Y. Liu
In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...
Muhammad Aizzat Zakaria
Trajectory tracking is an important aspect of autonomous vehicles: the vehicle must follow a predefined path with zero steady-state error. The difficulty arises from the nonlinearity of vehicle dynamics. Therefore, this paper proposes a stable tracking control for an autonomous vehicle. An approach that consists of steering wheel control and lateral control is introduced. This control algorithm is used for a non-holonomic navigation problem, namely tracking a reference trajectory in closed-loop form. A proposed future-prediction-point control algorithm is used to calculate the vehicle's lateral error in order to improve the performance of the trajectory tracking. Feedback signals from the steering wheel angle and yaw rate sensors are used as feedback information for the controller. The controller consists of a relationship between the future-point lateral error, the linear velocity, the heading error, and the reference yaw rate. This paper also introduces a spike detection algorithm to track spike errors that occur in GPS readings; the idea is to take advantage of the derivative of the steering rate. This paper aims to tackle the lateral error problem by applying the steering control law to the vehicle, and proposes a new path tracking control method that considers the future coordinates of the vehicle and the future estimated lateral error. The effectiveness of the proposed controller is demonstrated by a simulation and by a GPS experiment with noisy data. The approach used in this paper is not limited to autonomous vehicles, since the concept can also be applied to mobile robot platforms, whose kinematic model is similar.
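The spike detection idea, flagging samples whose finite-difference rate exceeds a physical limit, can be sketched as follows. The signal and threshold are illustrative and applied to a position trace rather than to the paper's steering-rate formulation:

```python
import numpy as np

def detect_spikes(signal, dt, rate_limit=5.0):
    """Flag samples involved in jumps whose finite-difference rate exceeds
    a physical limit. An isolated spike at sample i is flagged at i (the jump
    in) and i + 1 (the jump back out)."""
    rate = np.abs(np.diff(signal)) / dt
    return np.where(rate > rate_limit)[0] + 1

# Smooth lateral-position trace with one bad GPS fix injected at sample 50
t = np.linspace(0, 10, 101)
pos = np.sin(0.5 * t)
pos[50] += 3.0                         # injected spike
spikes = detect_spikes(pos, dt=t[1] - t[0])
```

A rejected sample can then be replaced by interpolation from its neighbours before the lateral-error calculation, so a single bad fix does not jerk the steering command.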
Sommermeyer, Dirk; Zou, Ding; Ficker, Joachim H; Randerath, Winfried; Fischer, Christoph; Penzel, Thomas; Sanner, Bernd; Hedner, Jan; Grote, Ludger
Cardiovascular disease is the main cause of death in Europe, and early detection of increased cardiovascular risk (CR) is of clinical importance. Pulse wave analysis based on pulse oximetry has proven useful for the recognition of increased CR. The current study provides a detailed description of the pulse wave analysis technology and its clinical application. A novel matching-pursuit-based feature extraction algorithm was applied for signal decomposition of the overnight photoplethysmographic pulse wave signals obtained from a single pulse oximeter sensor. The algorithm computes nine parameters (pulse index, SpO2 index, pulse wave amplitude index, respiratory-related pulse oscillations, pulse propagation time, periodic and symmetric desaturations, time under 90% SpO2, difference between pulse and SpO2 index, and arrhythmia). The technology was applied in 631 patients referred for a sleep study with suspected sleep apnea. The technical failure rate was 1.4%. Anthropometric data such as age and BMI correlated significantly with measures of vascular stiffness and pulse rate variability (PPT and age: r = -0.54, p < 0.001; PR and age: r = -0.36, p < 0.01). The composite biosignal risk score showed a dose-response relationship with the number of CR factors (p < 0.001) and was further elevated in patients with sleep apnea (AHI ≥ 15 n/h; p < 0.001). The developed algorithm extracts meaningful parameters indicative of cardiorespiratory and autonomic nervous system function and dysfunction in patients with suspected sleep-disordered breathing.
Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin
Real-time detection of gait events can be applied as a reliable input to control drop-foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered preferable due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on acceleration signals, different algorithms have been proposed in previous studies to detect the toe-off (TO) and heel-strike (HS) gait events. While these algorithms achieve relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, followed by determination of the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the algorithm is robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in applications such as drop-foot correction devices and leg prostheses.
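The core of the proposed detector, differentiating acceleration into jerk and picking peaks with simple heuristics, can be sketched as follows. The thresholds and synthetic signal are illustrative, and the paper's time-frequency parameter estimation is omitted:

```python
import numpy as np

def jerk_peaks(accel, fs, min_height, min_gap_s=0.3):
    """Differentiate acceleration into jerk, then pick local maxima above
    min_height that are at least min_gap_s apart (a simple peak heuristic)."""
    jerk = np.gradient(accel) * fs                 # d(accel)/dt
    min_gap = int(min_gap_s * fs)
    peaks = []
    for i in range(1, len(jerk) - 1):
        if jerk[i] > min_height and jerk[i] >= jerk[i - 1] and jerk[i] > jerk[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return jerk, peaks

# Synthetic shank acceleration: slow sway plus sharp transients at each
# simulated heel strike (samples 100 and 200, i.e. 1 s and 2 s)
fs = 100
t = np.arange(0, 3, 1 / fs)
accel = 0.2 * np.sin(2 * np.pi * 1.0 * t)
for hs in (100, 200):
    accel[hs] += 2.0
jerk, peaks = jerk_peaks(accel, fs, min_height=30.0)
```

Differentiating amplifies the sharp transients relative to the slow body-sway component, which is why jerk makes the event peaks easier to threshold than raw acceleration.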
Rohulla Kosari Langari
The transformation of the world through information technology and the development of the Internet have created knowledge-based competition in electronic commerce, increasing the competitive potential among organizations. Under these conditions, the growth of commercial transactions, delivered with speed and high quality, depends on dynamic electronic banking systems that use modern technology to facilitate electronic business processes. Internet banking is a potential opportunity and one of the fundamental pillars and determinants of e-banking, but in cyberspace it faces various obstacles and threats. One of these challenges is the lack of any guarantee of security for financial transactions, together with suspicious and unusual behavior such as mail fraud aimed at financial abuse. Various systems based on machine intelligence methods and data mining techniques have been designed for detecting fraud in user behavior and have been applied in various industries such as insurance, medicine, and banking. The aim of this article is the recognition of unusual user behavior in e-banking systems: detecting user behavior and categorizing the emerging patterns in order to prepare the ground for predicting unauthorized intrusion and detecting suspicious behavior. Since user behavior in Internet systems is uncertain, records of transactions can be useful for understanding these movements. Among machine learning methods, the decision tree technique is considered a common tool for classification and prediction. Therefore, in this research we first determined the effective banking variables and the weight of each in Internet behavior, and then, by combining various behavior patterns, derived a model of inductive rules that provides the ability to recognize different behaviors. Finally, four algorithms (CHAID, exhaustive CHAID, C4.5, and C5.0) were compared and evaluated for classification and detection.
Water body detection worldwide using spaceborne remote sensing is a challenging task. A global-scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses with spatial resolutions of 1 km and 333 m. The Red, Near-InfraRed (NIR), and Short-Wave InfraRed (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors, four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), the Water Body Potential Mask (WBPM), the Permanent Glacier Mask (PGM), and the Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Besides the water bodies layer, a quality layer based on the water body occurrences is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained, while a mean Omission Error (OE) of 15.4% was obtained for a minimum Water Surface Ratio (WSR) of 0.5, dropping to 9.8% for a minimum WSR of 0.6. Here, WSR is defined as the fraction of a PROBA-V pixel covered by water, as derived from high spatial resolution images, e.g., Landsat 8. Both the CE of 1.5% and the OE of 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and its products are freely available.
Lee, Jiseon; Park, Junhee; Yang, Sejung; Kim, Hani; Choi, Yun Seo; Kim, Hyeon Jin; Lee, Hyang Woon; Lee, Byung-Uk
The use of automatic electrical stimulation in response to early seizure detection has been introduced as a new treatment for intractable epilepsy. For the effective application of this method as a successful treatment, improving the accuracy of early seizure detection is crucial. In this paper, we propose the application of a frequency-based algorithm derived from principal component analysis (PCA) and demonstrate its improved efficacy for early seizure detection in a pilocarpine-induced epilepsy rat model. A total of 100 ictal electroencephalograms (EEGs) recorded during spontaneous recurrent seizures in 11 epileptic rats were included in the analysis. PCA was applied to the covariance matrix of a conventional EEG frequency band signal. Two PCA results were compared: one from the initial segment of the seizures (the first 5 s after seizure onset) and the other from the whole seizure segment. To compare accuracy, we obtained from the training set the specific threshold satisfying the target performance, and then compared the false positives (FP), false negatives (FN), and latency (Lat) of the PCA-based feature derived from the initial segment of the seizures with those of the other six features on the testing set. The PCA-based feature derived from the initial segment of the seizures performed significantly better than the other features, with 1.40% FP, zero FN, and 0.14 s Lat. These results demonstrate that the proposed frequency-based PCA feature, which captures the characteristics of the initial phase of a seizure, is effective for early seizure detection. Experiments with rat ictal EEGs showed an improved early seizure detection rate when PCA was applied to the covariance of the initial 5 s segment after visual seizure onset instead of the whole seizure segment or other conventional frequency bands.
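The central operation, PCA on the covariance of a short multichannel segment, can be sketched with synthetic data. This is a generic illustration of the technique, not the authors' exact feature pipeline:

```python
import numpy as np

def pca_feature(segment, n_components=1):
    """Project a multichannel segment (channels x samples) onto the top
    principal component(s) of its covariance matrix."""
    X = segment - segment.mean(axis=1, keepdims=True)   # center each channel
    cov = (X @ X.T) / X.shape[1]                        # channel covariance
    vals, vecs = np.linalg.eigh(cov)                    # ascending eigenvalues
    top = vecs[:, -n_components:]                       # dominant direction(s)
    return top.T @ X                                    # projected time series

rng = np.random.default_rng(0)
# 4 synthetic "EEG channels", 500 samples: a shared rhythmic component with
# channel-specific gains, plus independent channel noise
common = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 500))
segment = np.vstack([common * w for w in (1.0, 0.8, 0.6, 0.4)])
segment += 0.05 * rng.standard_normal(segment.shape)
proj = pca_feature(segment)
```

The first principal component recovers the shared rhythm and suppresses per-channel noise; fitting it on the first seconds of seizures biases the projection toward the onset dynamics, which is the idea the paper exploits.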
The generalized Radon-Fourier transform (GRFT) has been proposed to detect weak maneuvering radar targets by realizing coherent integration via a joint search in the motion parameter space. The two main drawbacks of the GRFT are its heavy computational burden and the blind speed side lobes (BSSL), which cause serious false alarms. The BSSL-learning-based particle swarm optimization (BPSO) was previously proposed to reduce the computational burden of the GRFT and solve the BSSL problem simultaneously. However, BPSO suffers from an apparent loss in detection performance compared with the GRFT. In this paper, a fast implementation of the GRFT using BSSL-learning-based modified wind-driven optimization (BMWDO) is proposed. In the BMWDO, the BSSL learning procedure is also used to deal with the BSSL phenomenon. In addition, the MWDO adjusts the coefficients in WDO using the Lévy distribution and the uniform distribution, and it outperforms PSO in a noisy environment. Compared with BPSO, the proposed method achieves better detection performance at a similar computational cost. Several numerical experiments are also provided to demonstrate the effectiveness of the proposed method.
Contemporary satellite Earth observation systems provide growing amounts of very high spatial resolution data that can be used in various applications. An increasing number of sensors make it possible to monitor selected areas in great detail. However, in order to handle the volume of data, a high level of automation is required. The semi-automatic change detection methodology described in this paper was developed to annually update land cover maps prepared in the context of the Geoland2 project. The proposed algorithm was tailored to work with different very high spatial resolution images acquired over different European landscapes. The methodology is a fusion of various change detection methods: (1) layer arithmetic; (2) vegetation index (NDVI) differencing; (3) texture calculation; and (4) methods based on canonical correlation analysis (multivariate alteration detection, MAD). User intervention during the production of the change map is limited to the selection of the input data, the size of initial segments and, optionally, the threshold for texture classification. To achieve a high level of automation, statistical thresholds were applied in most of the processing steps. Tests showed an overall change recognition accuracy of 89%, and the change type classification methodology can accurately classify transitions between classes.
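The NDVI-differencing component with a statistical threshold, one of the fused methods listed above, can be sketched as follows (synthetic two-date red/NIR rasters; the 2-sigma cutoff is an illustrative choice, not the paper's calibrated value):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, with a guard against /0."""
    return (nir - red) / (nir + red + 1e-9)

def change_mask(red1, nir1, red2, nir2, k=2.0):
    """Flag pixels whose NDVI difference deviates from the image mean by
    more than k standard deviations (a simple statistical threshold)."""
    diff = ndvi(red2, nir2) - ndvi(red1, nir1)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > k * sigma

rng = np.random.default_rng(1)
red1 = rng.uniform(0.1, 0.3, (64, 64))
nir1 = rng.uniform(0.5, 0.8, (64, 64))
red2, nir2 = red1.copy(), nir1.copy()
nir2[:8, :8] = 0.05        # simulated vegetation loss in one corner

mask = change_mask(red1, nir1, red2, nir2)
```

Only the simulated loss patch exceeds the threshold; unchanged pixels stay below it because their NDVI difference is near zero.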
A. A. Kalteh
Breast cancer is the second leading cause of death for women all over the world. Correct diagnosis of breast cancer is one of the major problems in the medical field. The literature shows that different pattern recognition techniques can improve diagnosis in this domain. This paper presents a novel hybrid intelligent method for the detection of breast cancer. The proposed method includes two main modules: a clustering module and a classifier module. In the clustering module, the input data are first clustered by a new technique, a suitable combination of the modified imperialist competitive algorithm (MICA) and the K-means algorithm. Then the Euclidean distance of each pattern from the determined clusters is computed. The classifier module determines the membership of the patterns using the computed distances. In this module, several neural networks, such as multilayer perceptrons, probabilistic neural networks and radial basis function neural networks, are investigated. Based on an experimental study, we choose the best classifier for recognizing breast cancer. The proposed system is tested on the Wisconsin Breast Cancer (WBC) database and the simulation results show that the recommended system has high accuracy.
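A reduced sketch of the clustering module's pipeline on synthetic data: plain Lloyd's K-means stands in for the MICA/K-means hybrid (the imperialist-competitive initialization is omitted), and the Euclidean distances to the resulting centroids become the features handed to the classifier module.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means (stand-in for the MICA + K-means hybrid)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def distance_features(X, centroids):
    """Euclidean distance of each pattern to every cluster centre."""
    return np.linalg.norm(X[:, None] - centroids, axis=2)

rng = np.random.default_rng(2)
benign = rng.normal(0.0, 0.5, (100, 4))      # toy stand-ins for WBC attributes
malignant = rng.normal(3.0, 0.5, (100, 4))
X = np.vstack([benign, malignant])

centroids = kmeans(X, k=2)
feats = distance_features(X, centroids)      # input to the classifier module
pred = feats.argmin(axis=1)                  # nearest-centroid baseline decision
```

Here a nearest-centroid decision already separates the two toy classes; in the paper these distance features instead feed a neural network classifier.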
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset of a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. For the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
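The harmony search wrapper can be sketched as a binary feature-subset search. This is a simplified stand-in: a Fisher-style class-separation score replaces the WNN classifier in the fitness function, and all parameters (harmony memory size, HMCR, PAR, the subset-size penalty) are illustrative.

```python
import numpy as np

def fitness(mask, X, y):
    """Between-class distance over within-class spread on the selected
    features, with a small penalty per feature (classifier stand-in)."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    s = Xs[y == 0].std(0).mean() + Xs[y == 1].std(0).mean() + 1e-9
    return np.linalg.norm(m0 - m1) / s - 0.01 * mask.sum()

def harmony_search(X, y, memory_size=10, iters=200, hmcr=0.9, par=0.3, seed=3):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    memory = rng.random((memory_size, n)) < 0.5      # random initial harmonies
    scores = np.array([fitness(m, X, y) for m in memory])
    for _ in range(iters):
        new = np.empty(n, dtype=bool)
        for j in range(n):
            if rng.random() < hmcr:                  # draw bit from memory
                new[j] = memory[rng.integers(memory_size), j]
                if rng.random() < par:               # pitch adjustment: flip
                    new[j] = ~new[j]
            else:                                    # random consideration
                new[j] = rng.random() < 0.5
        s = fitness(new, X, y)
        worst = scores.argmin()
        if s > scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    return memory[scores.argmax()]

rng = np.random.default_rng(4)
y = np.repeat([0, 1], 100)
X = rng.normal(0.0, 1.0, (200, 8))
X[:, 2] += 3 * y                 # only features 2 and 5 are informative
X[:, 5] -= 3 * y

best = harmony_search(X, y)
```

The search converges on a mask containing the two informative features, mirroring how the wrapper stage prunes the PCA-filtered wavelet coefficients before the WNN classifier.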
Ling, Steve S H; Nguyen, Hung T
Hypoglycemia, or low blood glucose, is dangerous and can result in unconsciousness, seizures, and even death. It is a common and serious side effect of insulin therapy in patients with diabetes. A hypoglycemia monitor is a noninvasive device that continuously measures physiological parameters to detect hypoglycemic episodes in patients with type 1 diabetes mellitus (T1DM). Based on heart rate (HR), the corrected QT interval of the ECG signal, the change of HR, and the change of the corrected QT interval, we develop a genetic algorithm (GA)-based multiple regression with a fuzzy inference system (FIS) to classify the presence of hypoglycemic episodes. The GA is used to find the optimal fuzzy rules and membership functions of the FIS and the model parameters of the regression method. In a clinical study of 16 children with T1DM, the natural occurrence of nocturnal hypoglycemic episodes was associated with HRs and corrected QT intervals. The overall data were randomly organized into a training set (eight patients) and a testing set (the other eight patients). The results show that the proposed algorithm achieves good sensitivity with acceptable specificity.
Cournapeau, David; Kawahara, Tatsuya
A new online, unsupervised voice activity detection (VAD) method is proposed. The method is based on a feature derived from high-order statistics (HOS), enhanced by a second metric based on normalized autocorrelation peaks to improve its robustness to non-Gaussian noises. This feature also discriminates between close-talk and far-field speech, thus providing a VAD method for human-to-human interaction that is independent of the energy level. The classification is done by an online variation of the Expectation-Maximization (EM) algorithm, which tracks and adapts to noise variations in the speech signal. Performance of the proposed method is evaluated on in-house data and on CENSREC-1-C, a publicly available database used for VAD in the context of automatic speech recognition (ASR). On both test sets, the proposed method outperforms a simple energy-based algorithm and is shown to be more robust against changes in speech sparsity, SNR and noise type.
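The online EM classification step can be sketched as a two-component 1-D Gaussian mixture updated frame by frame with a forgetting factor. This is only a schematic stand-in for the paper's algorithm: the feature here is a generic scalar (not the HOS feature), and the learning-rate scheme is an assumption.

```python
import math, random

class OnlineGaussianEM:
    """Two-component 1-D Gaussian mixture updated online; component 1 is
    interpreted as 'speech', component 0 as 'noise'."""
    def __init__(self, mu=(0.0, 1.0), var=(1.0, 1.0), w=(0.5, 0.5), rate=0.05):
        self.mu, self.var, self.w, self.rate = list(mu), list(var), list(w), rate

    def _pdf(self, x, k):
        return self.w[k] * math.exp(-(x - self.mu[k]) ** 2 / (2 * self.var[k])) \
               / math.sqrt(2 * math.pi * self.var[k])

    def update(self, x):
        p = [self._pdf(x, 0), self._pdf(x, 1)]
        total = (p[0] + p[1]) or 1e-300
        post = [pi / total for pi in p]
        for k in (0, 1):                      # stochastic-approximation E/M step
            lr = self.rate * post[k]
            self.w[k] += self.rate * (post[k] - self.w[k])
            self.mu[k] += lr * (x - self.mu[k])
            self.var[k] += lr * ((x - self.mu[k]) ** 2 - self.var[k])
            self.var[k] = max(self.var[k], 1e-3)
        return post[1] > 0.5                  # True = frame classified as speech

random.seed(5)
vad = OnlineGaussianEM()
frames = []
for _ in range(400):                          # interleaved noise/speech frames
    frames.append(random.gauss(0.0, 0.3))
    frames.append(random.gauss(3.0, 0.3))
decisions = [vad.update(x) for x in frames]
```

After a few hundred frames the two components lock onto the noise and speech feature distributions, and the per-frame posterior gives the VAD decision without any supervised training.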
Salem, Christian; Azar, Danielle; Tokajian, Sima
Melanoma is the most aggressive type of skin cancer. It is most commonly caused by excessive exposure to ultraviolet radiation, which triggers uncontrollable proliferation of melanocytes. Early detection makes melanoma relatively easy to cure. Diagnosis is usually done using traditional methods such as dermoscopy, which consists of a manual examination performed by the physician. However, these methods are not always well founded because they depend heavily on the physician's experience. Hence, there is a great need for a new automated approach to make diagnosis more reliable. In this paper, we present a two-phase technique to classify images of lesions as benign or malignant. The first phase consists of an image processing-based method that extracts the asymmetry, border irregularity, color variation and diameter of a given mole. The second phase classifies lesions using a genetic algorithm. Our technique shows a significant improvement over other well-known algorithms and proves to be more stable on both training and testing data.
In recent years, automation in the agricultural field has attracted increasing attention from researchers and greenhouse producers. The main reasons are to reduce costs, including labor costs, and to reduce the hard working conditions in greenhouses. In the present research, a vision system for a harvesting robot was developed for the recognition of green sweet pepper on the plant under natural light. The major challenge of this study was the noticeable color similarity between sweet peppers and plant leaves. To overcome this challenge, a new texture index based on edge density approximation (EDA) was defined and utilized in combination with color indices such as hue, saturation and the excessive green index (EGI). Fifty images were captured from fifty sweet pepper plants to evaluate the algorithm. The algorithm recognized 92 out of 107 sweet peppers (a detection accuracy of 86%) located within the workspace of the robot. The error of the system in recognizing background, mostly leaves, as green sweet pepper decreased by 92.98% when the newly defined texture index was used in comparison with color analysis alone. This shows the importance of integrating texture with color features when recognizing sweet peppers. The main causes of error, besides color similarity, were the waxy, rough surface of the sweet pepper, which causes higher reflectance and non-uniform lighting on the surface.
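The idea of separating smooth pepper surfaces from edge-rich leaf texture can be sketched with a local edge-density map (the exact EDA formulation in the paper is not reproduced here; the window size and gradient threshold below are illustrative assumptions):

```python
import numpy as np

def excessive_green(rgb):
    """EGI = 2G - R - B, high for green vegetation and green fruit alike,
    which is why a texture cue is needed on top of color."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def edge_density(gray, win=5, thresh=0.1):
    """Fraction of strong-gradient pixels in a local window: leaf regions
    (many edges) score high, a smooth pepper surface scores near zero."""
    gy, gx = np.gradient(gray.astype(float))
    edges = (np.hypot(gx, gy) > thresh).astype(float)
    k = win // 2
    padded = np.pad(edges, k, mode="edge")
    out = np.zeros_like(edges)
    for dy in range(win):                    # box filter via shifted sums
        for dx in range(win):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (win * win)

# Toy scene: smooth "pepper" patch embedded in noisy "leaf" texture.
rng = np.random.default_rng(6)
gray = rng.uniform(0.0, 1.0, (40, 40))       # leaves: rapidly varying intensity
gray[10:30, 10:30] = 0.5                     # pepper: smooth region
density = edge_density(gray)
```

Thresholding `density` (low = candidate pepper) and intersecting with a color mask from `excessive_green` mirrors the combination of texture and color cues described above.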
Anthony P. Malanoski
Here, we describe our efforts focused on the development of an algorithm for the identification of detection events in a real-time sensing application relying on the reporting of color values from commercially available color-sensing chips. The effort focuses on the identification of event occurrence rather than target identification, and utilizes approaches suitable for onboard device incorporation to facilitate portable and autonomous use. The described algorithm first excludes electronic noise generated by the sensor system and determines response thresholds. This automatic adjustment provides the potential for use with device variations as well as accommodating differing indicator behaviors. Multiple signal channels (RGB) as well as multiple indicator array elements are combined to report an event with a minimum of false responses. While the method reported was developed for use with paper-supported porphyrin and metalloporphyrin indicators, it should be equally applicable to other colorimetric indicators. Depending on device configuration, receiver operating characteristic (ROC) sensitivities of 1 could be obtained with specificities of 0.87 (threshold 160 ppb, ethanol).
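The noise-exclusion and threshold-determination steps can be sketched as follows. The multiplier `k` and the two-channel agreement rule are illustrative assumptions, not the paper's calibrated settings:

```python
import random, statistics

def make_detector(baseline, k=5.0, min_channels=2):
    """Per-channel thresholds learned from a quiet baseline (mean + k*stdev);
    an event is declared only when enough RGB channels agree, so that
    single-channel electronic noise does not raise a false alarm."""
    stats = [(statistics.mean(ch), statistics.stdev(ch)) for ch in zip(*baseline)]
    def detect(sample):
        exceeded = sum(abs(v - mu) > k * sd for v, (mu, sd) in zip(sample, stats))
        return exceeded >= min_channels
    return detect

# Simulated RGB readings (0-255 per channel) from a color-sensing chip.
random.seed(7)
baseline = [[random.gauss(mu, 1.0) for mu in (120, 80, 200)] for _ in range(200)]
detect = make_detector(baseline)

quiet_reading = [121, 79, 201]     # within sensor noise
event_reading = [150, 60, 230]     # indicator color change on analyte exposure
```

Because the thresholds are derived from each device's own baseline, the same code accommodates chip-to-chip variation and differing indicator response magnitudes.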
Sampe, Jahariah; Othman, Masuri
This study presents a proposed Fast Detection Anti-Collision Algorithm (FDACA) for Radio Frequency Identification (RFID) systems. The proposed FDACA is implemented on-chip using Application-Specific Integrated Circuit (ASIC) technology, and the algorithm is based on the deterministic anti-collision technique. The FDACA is novel in terms of faster identification, reducing the number of iterations during the identification process. The primary FDACA also reads the identification (ID) bits at once, regardless of their length. It does not require the tags to remember instructions from the reader during the communication process; the tags are treated as address-carrying devices only. As a result, simple, small, low-cost and memoryless tags can be produced. The proposed system is designed using Verilog HDL, simulated using ModelSim XE II and synthesized using Xilinx Synthesis Technology (XST). The system is implemented in hardware on a Field-Programmable Gate Array (FPGA) board for real-time verification. The verification results show that the FDACA system can identify tags without error up to an operating frequency of 180 MHz. Finally, the FDACA system is implemented on chip using a 0.18 μm library and Synopsys compiler tools. The resynthesis results show that the identification rate of the proposed FDACA system is 333 mega tags per second with a power requirement of 3.451 mW.
Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw
Harsh industrial conditions present in underground mining cause many difficulties for local damage detection in heavy-duty machinery. For vibration signals, one of the most intuitive approaches to obtaining a signal with the expected properties, such as clearly visible informative features, is prefiltration with an appropriately prepared filter. The design of such a filter is a very broad field of research in its own right. In this paper, the authors propose a novel approach to dedicated optimal filter design using a progressive genetic algorithm. The presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data, and its effectiveness has been proven for both healthy and damaged cases. A termination criterion for the evolution process was developed, and a diagnostic decision-making feature is proposed for determining the final result.
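A stripped-down sketch of evolutionary filter design for impulsive-fault enhancement: FIR coefficients are evolved to maximize the kurtosis of the filtered signal (a common impulsivity criterion). This simplified loop uses only truncation selection and mutation, not the progressive GA of the paper, and all hyperparameters are illustrative.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis: 3 for Gaussian noise, large for impulsive signals."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

def ga_filter(signal, taps=16, pop=30, gens=40, seed=8):
    """Evolve FIR filter coefficients maximizing output kurtosis."""
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, taps))
    for _ in range(gens):
        scores = np.array([kurtosis(np.convolve(signal, h, "same"))
                           for h in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:pop // 2]]                 # truncation selection
        children = parents + rng.normal(0, 0.1, parents.shape)  # mutation
        population = np.vstack([parents, children])
        best_score = scores[order[0]]
    return population[0], best_score

# Simulated vibration: white noise plus periodic impulses from local damage.
rng = np.random.default_rng(9)
signal = rng.normal(0.0, 1.0, 4000)
signal[::200] += 8.0                                           # impulsive component

h, score = ga_filter(signal)
```

The fitness needs no prior knowledge of the fault signature, which reflects the data-driven character of the method; a termination criterion would replace the fixed generation count in a fuller implementation.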
Veta, Mitko; van Diest, Paul J; Willems, Stefan M; Wang, Haibo; Madabhushi, Anant; Cruz-Roa, Angel; Gonzalez, Fabio; Larsen, Anders B L; Vestergaard, Jacob S; Dahl, Anders B; Cireşan, Dan C; Schmidhuber, Jürgen; Giusti, Alessandro; Gambardella, Luca M; Tek, F Boray; Walter, Thomas; Wang, Ching-Wei; Kondo, Satoshi; Matuszewski, Bogdan J; Precioso, Frederic; Snell, Violet; Kittler, Josef; de Campos, Teofilo E; Khan, Adnan M; Rajpoot, Nasir M; Arkoumani, Evdokia; Lacle, Miangela M; Viergever, Max A; Pluim, Josien P W
The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists. Copyright © 2014 Elsevier B.V. All rights reserved.
Lundbeck, Micha; Hartog, Laura; Grimm, Giso
The influence of hearing aid (HA) signal processing on the perception of spatially dynamic sounds has not been systematically investigated so far. Previously, we observed that interfering sounds impaired the detectability of left-right source movements, and reverberation that of near-far source movements, for elderly hearing-impaired (EHI) listeners (Lundbeck et al., 2017). Here, we explored potential ways of improving these deficits with HAs. To that end, we carried out acoustic analyses to examine the impact of two beamforming algorithms and a binaural coherence-based noise reduction scheme on the cues underlying movement perception. While binaural cues remained mostly unchanged, there were greater monaural spectral changes and increases in signal-to-noise ratio and direct-to-reverberant sound ratio as a result of the applied processing. Based on these findings, we conducted a listening test…
Zuo, Pingbing; Feng, Xueshang; Wang, Yi; Xie, Yanqiong; Li, Huijun; Xu, Xiaojun
Dynamic pressure pulses (DPPs) in the solar wind are a significant phenomenon closely related to the solar-terrestrial connection and the physical processes of solar wind dynamics. In order to automatically identify DPPs from solar wind measurements, we develop a procedure with a three-step detection algorithm that rapidly selects DPPs from the plasma data stream, simultaneously defines the transition region where large dynamic pressure variations occur, and demarcates the upstream and downstream regions by selecting the relatively quiet intervals before and after the abrupt change in dynamic pressure. To demonstrate the usefulness, efficiency, and accuracy of this procedure, we applied it to Wind observations from 1996 to 2008, successfully identifying the DPPs. The procedure can also be applied to other solar wind spacecraft observation data sets with different time resolutions.
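A condensed sketch of the three-step logic on a synthetic step-like pressure series; the window length, jump threshold and quietness criterion are illustrative placeholders rather than the paper's calibrated values:

```python
import numpy as np

def detect_dpps(time, pdyn, window=10, jump=0.5, quiet_std=0.1):
    """Sketch of a three-step search: (1) compare mean dynamic pressure in
    adjacent windows to find an abrupt change, (2) require relatively quiet
    upstream and downstream intervals, (3) record the transition region."""
    events = []
    i = window
    while i < len(pdyn) - window:
        up = pdyn[i - window:i]
        down = pdyn[i:i + window]
        if (abs(down.mean() - up.mean()) > jump
                and up.std() < quiet_std and down.std() < quiet_std):
            events.append((time[i - window], time[i + window]))
            i += window                 # skip past this transition region
        else:
            i += 1
    return events

t = np.arange(200.0)                    # seconds
pdyn = np.concatenate([np.full(100, 1.0), np.full(100, 2.5)])   # nPa, step-like DPP
pdyn += np.random.default_rng(10).normal(0.0, 0.02, 200)        # instrument noise
events = detect_dpps(t, pdyn)
```

The quietness condition is what rejects windows that straddle the jump, so the single reported interval brackets the actual transition region.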
Cristian Mihai Enescu
The aim of this study was to test the Bayesian algorithm implemented in the software STRUCTURE for detecting the number of clusters, using microsatellite data from four oak species. Several assignment models, with or without a priori grouping of individuals by species, were proposed. Better results were obtained when the sampling location information was used and when only two taxa were analyzed. In particular, pedunculate oak and sessile oak formed distinct clusters whatever assignment model was used. By contrast, no separation between the two oaks from series Lanuginosae was observed. This can be explained, on the one hand, by the small sample size for Italian oak or, on the other hand, by the genetic similarity of the two pubescent oaks, namely Quercus pubescens and Q. virgiliana. Our findings support the hypothesis that Italian oak is an intraspecific taxonomic unit of pubescent oak.
Zhang, Peng; Wang, Duo; Xiao, Jinghua
Recommender systems offer a powerful tool for addressing the information overload problem and have thus gained wide attention from scholars and engineers. A key challenge is how to make recommendations more accurate and personalized. We note that community structures exist widely in many real networks, and they can significantly affect recommendation results. By incorporating information about detected communities into the recommendation algorithms, an improved recommendation approach for networks with communities is proposed. The approach is examined on both artificial and real networks; the results show that the improvements in accuracy and diversity can be 20% and 7%, respectively. This reveals that it is beneficial to classify nodes based on their inherent properties in recommender systems.
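One simple way to fold community information into a recommendation score, sketched below, is to up-weight co-rating users from the same detected community (the boost factor and toy data are illustrative; the paper's actual weighting scheme may differ):

```python
from collections import defaultdict

def recommend(user, ratings, community, boost=2.0, top_n=2):
    """Score unseen items by co-rating users, weighting users from the same
    detected community more heavily (a community-aware variant of
    user-based collaborative filtering)."""
    seen = ratings[user]
    scores = defaultdict(float)
    for other, items in ratings.items():
        if other == user:
            continue
        w = boost if community[other] == community[user] else 1.0
        for item in items:
            if item not in seen:
                scores[item] += w
    # Highest score first; ties broken alphabetically for determinism.
    return [i for i, _ in sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))][:top_n]

ratings = {
    "u1": {"a", "b"},
    "u2": {"a", "c"},          # same community as u1
    "u3": {"d"},               # different community
    "u4": {"c", "d"},          # same community as u1
}
community = {"u1": 0, "u2": 0, "u3": 1, "u4": 0}
recs = recommend("u1", ratings, community)
```

Item "c" outranks "d" for u1 because its supporters lie in u1's own community, which is the kind of personalization gain the community-aware approach targets.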
A 4+12+16 amplitude phase shift keying (APSK) modulation outperforms other 32-APSK modulations in a nonlinear additive white Gaussian noise (AWGN) channel because of its intrinsic robustness against AM/AM and AM/PM distortions caused by the nonlinear characteristics of a high-power amplifier. This modulation scheme has therefore been adopted in the Digital Video Broadcasting-Satellite 2 (DVB-S2) European standard, and it has been considered for high-rate transmission of telemetry data in deep space communications by the Consultative Committee for Space Data Systems, which provides a forum for discussion of common problems in the development and operation of space data systems. In this paper, we present an improved bits-to-symbol mapping scheme with a better bit error rate for a 4+12+16 APSK signal in a nonlinear AWGN channel, and we propose a simple signal detection algorithm for 4+12+16 APSK based on the presented bit mapping.
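For orientation, the 4+12+16 constellation geometry and a baseline minimum-distance symbol decision can be sketched as follows. The ring radii and phase offsets below are illustrative placeholders (not the DVB-S2 optimized values), and this generic decision rule is not the simplified detection algorithm the paper proposes:

```python
import cmath, math

def apsk_constellation(radii=(1.0, 2.6, 4.2), counts=(4, 12, 16)):
    """32-APSK symbols on three concentric rings (4+12+16 points)."""
    points = []
    for r, n in zip(radii, counts):
        points += [r * cmath.exp(1j * (2 * math.pi * k / n + math.pi / n))
                   for k in range(n)]
    return points

def detect(received, constellation):
    """Baseline minimum-Euclidean-distance symbol decision."""
    return min(range(len(constellation)),
               key=lambda i: abs(received - constellation[i]))

const = apsk_constellation()
decided = detect(const[17] + complex(0.05, -0.03), const)   # small AWGN offset
```

A small perturbation leaves the received point inside the transmitted symbol's decision region, so the index is recovered; the paper's contribution is a cheaper decision exploiting the ring structure and its bit mapping.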
Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C
The aim was to detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions in 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step for a Markov random field model leading to extraction of the lesion contour. After FLL detection across frames, the time-intensity curve (TIC) is computed, which characterizes the contrast agent's behavior at all vascular phases with respect to the adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and fed into a support vector machine (SVM) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, the detected lesions had an average overlap of 0.89 ± 0.16 with manual segmentations across all CEUS frame subsets included in the study. The highest classification accuracy of the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity of 93.1% and 86.9%, respectively. The proposed quantification system, which combines FLL detection and classification algorithms, may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.
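The TIC feature-extraction stage can be sketched on a synthetic gamma-variate-like enhancement curve. The four features below (peak enhancement, time to peak, wash-in slope, area under the curve) are common perfusion descriptors used for illustration; they are not necessarily the eight features computed in the paper.

```python
import numpy as np

def tic_features(t, intensity):
    """Basic perfusion features from a time-intensity curve."""
    peak_idx = int(intensity.argmax())
    peak = float(intensity[peak_idx])
    t_peak = float(t[peak_idx])
    wash_in = (peak - float(intensity[0])) / (t_peak - float(t[0])) if peak_idx else 0.0
    # Trapezoidal area under the curve.
    auc = float(((intensity[1:] + intensity[:-1]) / 2.0 * np.diff(t)).sum())
    return {"peak": peak, "time_to_peak": t_peak, "wash_in": wash_in, "auc": auc}

t = np.linspace(0.0, 60.0, 121)                           # seconds after injection
intensity = 100.0 * (t / 10.0) * np.exp(1.0 - t / 10.0)   # gamma-variate-like TIC
feats = tic_features(t, intensity)
```

A feature vector of this kind, computed for both the lesion and adjacent parenchyma, is what would then be fed to the SVM classifier.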
Hermes, Laurie G.; Witt, Arthur; Smith, Steven D.; Klingle-Wilson, Diana; Morris, Dale; Stumpf, Gregory J.; Eilts, Michael D.
The Federal Aviation Administration's (FAA) Terminal Doppler Weather Radar (TDWR) system was primarily designed to address the operational needs of pilots in the avoidance of low-altitude wind shears upon takeoff and landing at airports. One of the primary methods of wind-shear detection for the TDWR system is the gust-front detection algorithm. The algorithm is designed to detect gust fronts that produce a wind-shear hazard and/or sustained wind shifts. It serves the hazard warning function by providing an estimate of the wind-speed gain for aircraft penetrating the gust front. The gust-front detection and wind-shift algorithms together serve a planning function by providing forecasted gust-front locations and estimates of the horizontal wind vector behind the front, respectively. This information is used by air traffic managers to determine arrival and departure runway configurations and aircraft movements to minimize the impact of wind shifts on airport capacity. This paper describes the gust-front detection and wind-shift algorithms to be fielded in the initial TDWR systems. Results of a quantitative performance evaluation using Doppler radar data collected during TDWR operational demonstrations at the Denver, Kansas City, and Orlando airports are presented. The algorithms were found to be operationally useful by the FAA airport controllers and supervisors.
Huerga, Helena; Shiferie, Fisseha; Grebe, Eduard; Giuliani, Ruggero; Farhat, Jihane Ben; Van-Cutsem, Gilles; Cohen, Karen
Accurately identifying individuals who are on antiretroviral therapy (ART) is important to determine ART coverage and proportion on ART who are virally suppressed. ART is also included in recent infection testing algorithms used to estimate incidence. We compared estimates of ART coverage, viral load suppression rates and HIV incidence using ART self-report and detection of antiretroviral (ARV) drugs and we identified factors associated with discordance between the methods. Cross-sectional population-based survey in KwaZulu-Natal, South Africa. Individuals 15-59 years were eligible. Interviews included questions about ARV use. Rapid HIV testing was performed at the participants' home. Blood specimens were collected for ARV detection, LAg-Avidity HIV incidence testing and viral load quantification in HIV-positive individuals. Multivariate logistic regression models were used to identify socio-demographic covariates associated with discordance between self-reported ART and ARV detection. Of the 5649 individuals surveyed, 1423 were HIV-positive. Median age was 34 years and 76.3% were women. ART coverage was estimated at 51.4% (95%CI:48.5-54.3), 53.1% (95%CI:50.2-55.9) and 56.1% (95%CI:53.5-58.8) using self-reported ART, ARV detection and both methods combined (classified as ART exposed if ARV detected and/or ART reported) respectively. ART coverage estimates using the 3 methods were fairly similar within sex and age categories except in individuals aged 15-19 years: 33.3% (95%CI:23.3-45.2), 33.8% (95%CI:23.9-45.4%) and 44.3% (95%CI:39.3-46.7) using self-reported ART, ARV detection and both methods combined. Viral suppression below 1000cp/mL in individuals on ART was estimated at 89.8% (95%CI:87.3-91.9), 93.1% (95%CI:91.0-94.8) and 88.7% (95%CI:86.2-90.7) using self-reported ART, ARV detection and both methods combined respectively. HIV incidence was estimated at 1.4 (95%CI:0.8-2.0) new cases/100 person-years when employing no measure of ARV use, 1.1/100PY (95%CI:0