WorldWideScience

Sample records for bathymetric features detected

  1. Variability In Long-Wave Runup as a Function of Nearshore Bathymetric Features

    Energy Technology Data Exchange (ETDEWEB)

    Dunkin, Lauren McNeill [Texas A & M Univ., College Station, TX (United States)]

    2010-05-01

    Beaches and barrier islands are vulnerable to extreme storm events, such as hurricanes, that can cause severe erosion and overwash to the system. Having dunes and a wide beach in front of coastal infrastructure can provide protection during a storm, but the influence that nearshore bathymetric features have in protecting the beach and barrier island system is not completely understood. The spatial variation in nearshore features, such as sand bars and beach cusps, can alter nearshore hydrodynamics, including wave setup and runup. The influence of bathymetric features on long-wave runup can be used in evaluating the vulnerability of coastal regions to erosion and dune overtopping, evaluating the changing morphology, and implementing plans to protect infrastructure. In this thesis, long-wave runup variation due to changing bathymetric features is quantified with the numerical model XBeach (eXtreme Beach behavior model). Wave heights are analyzed to determine the energy through the surfzone. XBeach assumes that coastal erosion at the land-sea interface is dominated by bound long-wave processes. Several hydrodynamic conditions are used to force the numerical model. The XBeach simulation results suggest that bathymetric irregularity induces significant changes in the extreme long-wave runup at the beach and the energy indicator through the surfzone.

  2. Subducted bathymetric features linked to variations in earthquake apparent stress along the northern Japan Trench

    Science.gov (United States)

    Moyer, P. A.; Bilek, S. L.; Phillips, W. S.

    2010-12-01

    Ocean floor bathymetric features such as seamounts and ridges are thought to influence the earthquake rupture process when they enter the subduction zone by causing changes in frictional conditions along the megathrust contact between the subducting and overriding plates. Once subducted, these features have been described as localized areas of heterogeneous plate coupling, with some controversy over whether these features cause an increase or decrease in interplate coupling. Along the northern Japan Trench, a number of bathymetric features, such as horst and graben structures and seamounts, enter the subduction zone where they may vary earthquake behavior. Using seismic coda waves, scattered energy following the direct wave arrivals, we compute apparent stress (a measure of stress drop proportional to radiated seismic energy that has been tied to the strength of the fault interface contact) for 329 intermediate magnitude (3.2 earthquake spectra for path and site effects and compute apparent stress using the seismic moment and corner frequency determined from the spectra. Preliminary results indicate apparent stress values between 0.3 - 22.6 MPa for events over a depth range of 2 - 55 km, similar to those found in other studies of the region although within a different depth range, with variations both along-strike and downdip. Off the Sanriku Coast, horst and graben structures enter the Japan Trench in an area where a large number of earthquakes occur at shallow (< 30 km) depth. These shallow events have a mean apparent stress of 1.2 MPa (range 0.3 - 3.8 MPa) which is approximately 2 times lower than the mean apparent stress for other events along the northern portion of this margin in the same shallow depth range. The relatively low apparent stress for events related to subducting horst and graben structures suggests weak interplate coupling between the subducting and overriding plates due to small, irregular contact zones with these features at depth. This is in
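    For reference, the apparent stress used above has a standard definition in terms of the radiated seismic energy and the seismic moment; the relation below is stated with the usual symbols as general background, not as a formula quoted from the paper.

```latex
% Apparent stress: rigidity (mu) times radiated seismic energy over seismic moment
\sigma_a = \mu \, \frac{E_R}{M_0}
```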

  3. NOS Bathymetric Maps

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection of bathymetric contour maps which represent the seafloor topography includes over 400 individual titles and covers US offshore areas including Hawaii...

  4. Tumor detection using feature extraction

    International Nuclear Information System (INIS)

    Sankar, A.S.; Amudhavalli, N.; Sivakolundu, M.K.

    2008-01-01

    The assistance system for brain tumor detection helps the doctor to analyse a brain tumor in an MRI image and to make a decision. Manual detection takes 3-5 hours to analyse the tumor. With an assistance system, doctors are in a position to analyse the tumor faster and make a correct decision.

  5. Features for detecting smoke in laparoscopic videos

    Directory of Open Access Journals (Sweden)

    Jalal Nour Aldeen

    2017-09-01

    Video-based smoke detection in laparoscopic surgery has different potential applications, such as the automatic addressing of surgical events associated with the electrocauterization task and the development of automatic smoke removal. In the literature, video-based smoke detection has been studied widely for fire surveillance systems. Nevertheless, the proposed methods are insufficient for smoke detection in laparoscopic videos because they often depend on assumptions which rarely hold in laparoscopic surgery, such as a static camera. In this paper, ten visual features based on motion, texture and colour of smoke are proposed and evaluated for smoke detection in laparoscopic videos. These features are RGB channels, an energy-based feature, texture features based on the gray level co-occurrence matrix (GLCM), an HSV colour space feature, and features based on the detection of moving regions using optical flow and the smoke colour in HSV colour space. These features were tested on four laparoscopic cholecystectomy videos. Experimental observations show that each feature can provide valuable information in performing the smoke detection task. However, each feature has weaknesses in detecting the presence of smoke in some cases. By combining all proposed features, smoke with high and even low density can be identified robustly and the classification accuracy increases significantly.
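    As a rough illustration of the texture and colour feature families listed above, the sketch below computes GLCM statistics and mean HSV values for an image patch. It assumes scikit-image is available and uses a stand-in image; the helper name smoke_patch_features is invented here, and nothing in it is the authors' implementation.

```python
# Minimal sketch of GLCM texture features and HSV colour features for a patch.
# Assumes scikit-image is installed; illustrative, not the paper's code.
import numpy as np
from skimage import data, color
from skimage.feature import graycomatrix, graycoprops

def smoke_patch_features(rgb_patch):
    """Return a small feature vector: GLCM contrast/energy/homogeneity + HSV means."""
    gray = (color.rgb2gray(rgb_patch) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "energy", "homogeneity")]
    hsv = color.rgb2hsv(rgb_patch)
    colour = hsv.reshape(-1, 3).mean(axis=0)          # mean H, S, V
    return np.array(texture + list(colour))

if __name__ == "__main__":
    patch = data.astronaut()[:64, :64]                # stand-in for a video frame patch
    print(smoke_patch_features(patch))
```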

  6. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao

    2015-02-01

    The patch layout of 3D surfaces reveals the high-level geometric and topological structures. In this paper, we study the patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines several key ingredients, including feature detection, feature filtering, feature curve extension, patch subdivision and boundary smoothing. Our framework is able to compute patch layouts through concave features as previous approaches do, but is also able to generate nice layouts through smooth regions. We demonstrate the effectiveness of our framework by comparing with the state-of-the-art methods.

  7. Integrating bathymetric and topographic data

    Science.gov (United States)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly-spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
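    A loose illustration of this kind of comparison is sketched below: scattered depth samples are interpolated with inverse-distance weighting and with radial basis functions (multiquadric and thin-plate spline), and each method is scored by RMSE on held-out points. It assumes NumPy/SciPy and synthetic data, not the NHC/JUPEM datasets, and omits Kriging, which needs a dedicated library.

```python
# Sketch: compare scattered-to-grid interpolation methods by hold-out RMSE.
# Synthetic data and SciPy only; illustrative, not the study's actual pipeline.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 400), rng.uniform(0, 10, 400)
z = -20 + 5 * np.sin(x) + 3 * np.cos(y)              # fake "bathymetry"
train, test = slice(0, 300), slice(300, 400)

def idw(xi, yi, xs, ys, zs, power=2):
    """Inverse distance weighting from samples (xs, ys, zs) to query points (xi, yi)."""
    d = np.hypot(xi[:, None] - xs[None, :], yi[:, None] - ys[None, :]) + 1e-12
    w = 1.0 / d**power
    return (w * zs).sum(axis=1) / w.sum(axis=1)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

pred_idw = idw(x[test], y[test], x[train], y[train], z[train])
print("IDW          RMSE:", rmse(pred_idw, z[test]))

for kind in ("multiquadric", "thin_plate"):
    rbf = Rbf(x[train], y[train], z[train], function=kind)
    print(f"{kind:12s} RMSE:", rmse(rbf(x[test], y[test]), z[test]))
```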

  8. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao; Yan, Dongming; Wonka, Peter

    2015-01-01

    The patch layout of 3D surfaces reveals the high-level geometric and topological structures. In this paper, we study the patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines

  9. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Pantic, Maja; Poel, Mannes; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  10. Fall Detection Using Smartphone Audio Features.

    Science.gov (United States)

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has an acceptable computational requirement for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
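    A minimal sketch of the MFCC-plus-classifier portion of such a pipeline is shown below; it assumes librosa and scikit-learn, substitutes synthetic bursts for real fall/no-fall recordings, and uses an invented helper name (mfcc_vector), so it only illustrates the general approach rather than the paper's system.

```python
# Sketch: MFCC features + SVM for two-class audio events (synthetic stand-in data).
# Assumes librosa and scikit-learn; illustrative only.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

SR = 16000  # assumed sampling rate

def mfcc_vector(sig, sr=SR, n_mfcc=13):
    """Mean and std of MFCCs over time -> fixed-length feature vector."""
    m = librosa.feature.mfcc(y=sig, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

rng = np.random.default_rng(1)

def fake_event(loud):
    """Stand-in for a recorded sound: loud decaying burst vs. quiet background."""
    t = np.linspace(0, 1, SR, endpoint=False)
    env = np.exp(-5 * t) if loud else 0.1 * np.ones_like(t)
    return (env * rng.normal(size=SR)).astype(np.float32)

X = np.array([mfcc_vector(fake_event(i % 2 == 0)) for i in range(40)])
y_cls = np.array([int(i % 2 == 0) for i in range(40)])   # 1 = burst ("fall"), 0 = quiet

print(cross_val_score(SVC(kernel="rbf"), X, y_cls, cv=5).mean())
```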

  11. Lake Bathymetric Aquatic Vegetation

    Data.gov (United States)

    Minnesota Department of Natural Resources — Aquatic vegetation represented as polygon features, coded with vegetation type (emergent, submergent, etc.) and field survey date. Polygons were digitized from...

  12. A new approach for detecting local features

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2010-01-01

    Local features have, up to now, often been understood in the sense of interest points. A patch around each point is formed to compute descriptors or feature vectors. Therefore, in order to satisfy different invariant imaging conditions such as scales and viewpoints, an input image is often represented i...

  13. Using Polarization features of visible light for automatic landmine detection

    NARCIS (Netherlands)

    Jong, W. de; Schavemaker, J.G.M.

    2007-01-01

    This chapter describes the usage of polarization features of visible light for automatic landmine detection. The first section gives an introduction to land-mine detection and the usage of camera systems. In section 2 detection concepts and methods that use polarization features are described.

  14. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. 10 s immediately before MSE and also before counterexamples of fatigued, but attentive driving were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support-vector machines (SVM), in which the feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 – 4.9 % and 1.9 – 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 – 0.006 % and 0.002 – 0.06 %, respectively. This shows that PSD features of vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
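    The two feature families described above are straightforward to reproduce in outline; the sketch below computes band-averaged log-PSD values (0.5 Hz bands between 0 and 8 Hz, via Welch's method) and the maximum cross-correlation between two channels, on synthetic signals. It assumes SciPy, and the function names are invented for illustration.

```python
# Sketch: band-averaged log-PSD (0-8 Hz in 0.5 Hz bands) and max cross-correlation
# between two EOG-like channels. Synthetic data; illustrative only.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate in Hz

def log_psd_bands(x, fs=FS, band_width=0.5, f_max=8.0):
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)          # 0.25 Hz frequency resolution
    feats, lo = [], 0.0
    while lo < f_max:
        mask = (f >= lo) & (f < lo + band_width)
        feats.append(np.log(pxx[mask].mean() + 1e-20))
        lo += band_width
    return np.array(feats)

def max_cross_corr(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((np.correlate(a, b, mode="full") / len(a)).max())

rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS                           # 10 s window, as in the study
horiz = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.normal(size=t.size)
vert = np.roll(horiz, 20) + 0.5 * rng.normal(size=t.size)

print(log_psd_bands(vert))
print(max_cross_corr(horiz, vert))
```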

  15. Breast Cancer Detection with Reduced Feature Set

    Directory of Open Access Journals (Sweden)

    Ahmet Mert

    2015-01-01

    This paper explores feature reduction properties of independent component analysis (ICA) on a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested on different validation (5/10-fold cross-validation) and partitioning (20%–40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign and malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values including area under curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system, while reducing computational complexity.
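    The reduction described above maps the 30 WDBC features to a single independent component before classification. A minimal sketch of that idea, using scikit-learn's bundled copy of the WDBC data, FastICA with one component and a k-NN classifier, is given below; it illustrates the general approach, not the paper's exact configuration.

```python
# Sketch: reduce WDBC to one independent component, then classify with k-NN.
# Uses scikit-learn only; illustrative, not the paper's exact setup.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)        # WDBC: 569 samples, 30 features

full = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
reduced = make_pipeline(StandardScaler(),
                        FastICA(n_components=1, random_state=0),
                        KNeighborsClassifier(n_neighbors=5))

print("30 features :", cross_val_score(full, X, y, cv=10).mean())
print("1 IC feature:", cross_val_score(reduced, X, y, cv=10).mean())
```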

  16. Detection of Fraudulent Emails by Employing Advanced Feature Abundance

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Glasdam, Mathies

    2014-01-01

    In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features...... are incorporated step by step. The detection of fraudulent email has been considered as a classification problem and it is evaluated using various state-of-the-art algorithms and on CCM [1] which is the authors' previous cluster based classification model. The experiments have been performed on diverse feature sets...... and the different classification methods. The comparison of the results is also presented and the evaluation shows that for the fraudulent email detection tasks, the feature set is more important regardless of classification method. The results of the study suggest that the task of fraudulent emails detection...

  17. EOG feature relevance determination for microsleep detection

    OpenAIRE

    Golz Martin; Wollner Sebastian; Sommer David; Schnieder Sebastian

    2017-01-01

    Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. 10 s immediately before MSE and also before counterexamples of fatigued, but attentive driving were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised learning vector quantisation (GRLVQ) was used as ARD...

  18. Degree of anisotropy as an automated indicator of rip channels in high resolution bathymetric models

    Science.gov (United States)

    Trimble, S. M.; Houser, C.; Bishop, M. P.

    2017-12-01

    A rip current is a concentrated seaward flow of water that forms in the surf zone of a beach as a result of alongshore variations in wave breaking. Rips can carry swimmers swiftly into deep water, and they are responsible for hundreds of fatal drownings and thousands of rescues worldwide each year. These currents form regularly alongside hard structures like piers and jetties, and can also form along sandy coasts when there is a three dimensional bar morphology. This latter rip type tends to be variable in strength and location, making it arguably the most dangerous to swimmers and the most difficult to identify. These currents form in characteristic rip channels in surf zone bathymetry, in which the primary axis of self-similarity is oriented shore-normal. This paper demonstrates a new method for automating identification of such rip channels in bathymetric digital surface models (DSMs) using bathymetric data collected by various remote sensing methods. Degree of anisotropy is used to detect rip channels and distinguish between sandbars, rip channels, and other beach features. This has implications for coastal geomorphology theory and safety practices. As technological advances increase access and accuracy of topobathy mapping methods in the surf zone, frequent nearshore bathymetric DSMs could be more easily captured and processed, then analyzed with this method to result in localized, automated, and frequent detection of rip channels. This could ultimately reduce rip-related fatalities worldwide (i) in present mitigation, by identifying the present location of rip channels, (ii) in forecasting, by tracking the channel's evolution through multiple DSMs, and (iii) in rip education by improving local lifeguard knowledge of the rip hazard. Although this paper only applies analysis of degree of anisotropy to the identification of rip channels, this parameter can be applied to multiple facets of barrier island morphological analysis.

  19. Space moving target detection using time domain feature

    Science.gov (United States)

    Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu

    2018-01-01

    The traditional space target detection methods mainly use the spatial characteristics of the star map to detect the targets, which cannot make full use of the time domain information. This paper presents a new space moving target detection method based on time domain features. We firstly construct the time spectral data of the star map, then analyze the time domain features of the main objects (target, stars and the background) in star maps, and finally detect the moving targets using the single pulse feature of the time domain signal. The real star map target detection experimental results show that the proposed method can effectively detect the trajectory of moving targets in the star map sequence, and the detection probability reaches 99% when the false alarm rate is about 8×10^-5, which outperforms those of compared algorithms.

  1. Ocean Striations Detecting and Its Features

    Science.gov (United States)

    Guan, Y. P.; Zhang, Y.; Chen, Z.; Liu, H.; Yu, Y.; Huang, R. X.

    2016-02-01

    Over the past 10 years or so, ocean striations have been one of the research frontiers, as reported by many investigators. With suitable filtering subroutines, striations can be revealed from many different types of ocean datasets. It is clear that striations are a type of meso-scale phenomenon in the large-scale circulation system, which appears in the form of alternating band-like structures. We present a comprehensive study on the effectiveness of different detection approaches to unveiling the striations. Three one-dimensional filtering methods are compared: Gaussian smoothing, Hanning high-pass filtering and Chebyshev high-pass filtering. Our results show that all three methods can reveal ocean banded structures, but the Chebyshev filtering is the best choice. The Gaussian smoothing is not a high pass filter, and it can merely bring regional striations, such as those in the Eastern Pacific, to light. The Hanning high pass filter can introduce a northward shifting of stripes, so it is not as good as the Chebyshev filter. On the other hand, striations in the open ocean are mostly zonally oriented; however, there are always exceptions. In particular, in the coastal ocean, due to topography constraints and along-shore currents, striations can be tilted in the meridional direction. We examined the band-like structure of striations for some selected regions of the open ocean and the semi-closed sub-basins, such as the South China Sea, the Gulf of Mexico, the Mediterranean Sea and the Japan Sea. A reasonable interpretation is given here.
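    To make the filtering comparison concrete, the sketch below applies the two main ideas to a synthetic meridional profile: removing a large-scale background with a Gaussian smoother versus applying a Chebyshev type-I high-pass filter. It assumes SciPy, uses arbitrary parameters and synthetic data, and is only a schematic of the kind of processing described above.

```python
# Sketch: isolate striation-like anomalies from a 1-D profile by
# (a) subtracting a Gaussian-smoothed background, (b) Chebyshev type-I high-pass filtering.
# Synthetic profile; illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import cheby1, filtfilt

n = 1000
lat = np.linspace(-30, 30, n)                      # "latitude" axis in degrees
background = 0.5 * np.cos(np.deg2rad(lat) * 3)     # large-scale signal
striations = 0.05 * np.sin(2 * np.pi * lat / 3.0)  # ~3-degree banded anomalies
profile = background + striations + 0.01 * np.random.default_rng(0).normal(size=n)

# (a) high-pass by removing a smoothed background
smooth = gaussian_filter1d(profile, sigma=50)      # sigma in samples
anom_gauss = profile - smooth

# (b) Chebyshev type-I high-pass (cutoff chosen near the striation scale)
dx = lat[1] - lat[0]                               # sample spacing in degrees
cutoff = 1.0 / 10.0                                # cycles per degree (wavelengths < 10 deg pass)
b, a = cheby1(N=4, rp=0.5, Wn=cutoff, btype="highpass", fs=1.0 / dx)
anom_cheby = filtfilt(b, a, profile)

print(np.corrcoef(anom_gauss, striations)[0, 1],
      np.corrcoef(anom_cheby, striations)[0, 1])
```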

  2. Feature Detection Systems Enhance Satellite Imagery

    Science.gov (United States)

    2009-01-01

    -resolution satellites, which provide the benefit of images detailed enough to reveal large features like highways while still broad enough for global coverage, continue to scan the entirety of the Earth s surface. In 2012, NASA plans to launch the Landsat Data Continuity Mission (LDCM), or Landsat 8, to extend the Landsat program s contributions to cartography, water management, natural disaster relief planning, and more.

  3. Adapting Local Features for Face Detection in Thermal Image

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-11-01

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting condition. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring the local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the description power of local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained by different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and give a discussion based on the results.

  4. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...

  5. Detection of fraudulent emails by employing advanced feature abundance

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2014-11-01

    In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features are incorporated step by step. The detection of fraudulent email has been considered as a classification problem and it is evaluated using various state-of-the-art algorithms and on CCM (Nizamani et al., 2011) [1], which is the authors' previous cluster based classification model. The experiments have been performed on diverse feature sets and the different classification methods. The comparison of the results is also presented and the evaluation shows that for the fraudulent email detection tasks, the feature set is more important regardless of classification method. The results of the study suggest that the task of fraudulent email detection requires a careful choice of feature set, while the choice of classification method is of less importance.

  6. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the information will be reduced, which may improve classification accuracy. This paper proposed a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  7. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Networks (CNN) features based HR satellite images change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method can avoid the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher dimensional feature map. Finally, a change map is computed using pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images according to qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
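    The core of the described pipeline (pretrained CNN features from two co-registered images, followed by a pixel-wise Euclidean distance) can be sketched as below. The example assumes a recent torchvision with pretrained VGG16 weights and uses random tensors as stand-ins for satellite tiles; the layer choice and the lack of input normalisation are placeholders, not the authors' settings.

```python
# Sketch: pixel-wise change map from deep features of two co-registered images.
# Uses a pretrained VGG16 from torchvision as a generic feature extractor;
# illustrative of the approach, not the paper's network or settings.
import torch
import torch.nn.functional as F
from torchvision import models

def deep_change_map(img_a, img_b, layer=16):
    """img_a, img_b: float tensors (3, H, W) in [0, 1], already co-registered."""
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:layer].eval()
    with torch.no_grad():
        fa = vgg(img_a.unsqueeze(0))               # (1, C, h, w) feature maps
        fb = vgg(img_b.unsqueeze(0))
    dist = torch.linalg.vector_norm(fa - fb, dim=1, keepdim=True)  # per-pixel Euclidean distance
    dist = F.interpolate(dist, size=img_a.shape[1:], mode="bilinear", align_corners=False)
    d = dist[0, 0]
    return (d - d.min()) / (d.max() - d.min() + 1e-12)             # normalised change map

if __name__ == "__main__":
    a = torch.rand(3, 128, 128)                    # stand-ins for two satellite tiles
    b = a.clone()
    b[:, 40:80, 40:80] = torch.rand(3, 40, 40)     # simulated change
    print(deep_change_map(a, b).shape)
```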

  8. Mariana Trench Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) created a bathymetric digital elevation model (DEM) for the Mariana Trench and adjacent seafloor in the Western...

  9. Hybrid feature selection for supporting lightweight intrusion detection systems

    Science.gov (United States)

    Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin

    2017-08-01

    Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The selected set of features from the previous phase is further refined in the second phase in a wrapper manner, in which Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
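    A compact stand-in for the two-phase selection is sketched below: a chi-square filter keeps a candidate subset, which a Random Forest then refines by feature importance before the final RF model is trained (importance-based refinement is used here in place of the paper's wrapper search). Synthetic data replaces NSL-KDD; scikit-learn is assumed.

```python
# Sketch: two-phase (filter + importance-based) feature selection with a final RF model.
# Synthetic data stands in for network-traffic features; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           n_redundant=10, random_state=0)
X = MinMaxScaler().fit_transform(X)                 # chi2 needs non-negative inputs

# Phase 1: filter -- keep the 20 features with the highest chi-square scores.
filt = SelectKBest(chi2, k=20).fit(X, y)
X_filt = filt.transform(X)

# Phase 2: refine with a Random Forest -- keep the top 10 by importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_filt, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
X_final = X_filt[:, top]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("all features :", cross_val_score(clf, X, y, cv=5).mean())
print("selected set :", cross_val_score(clf, X_final, y, cv=5).mean())
```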

  10. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods that are used on the performance of automatic detection of cyberbullying. To perform the experiments, the FormSpring.me dataset is used and the effects of preprocessing methods; several classifiers like C4.5, Naïve Bayes, kNN, and SVM; and information gain and chi square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopwords removal are applied. Using feature selection also improves cyberbully detection performance. When classifiers are compared, C4.5 performs the best for the used dataset.

  11. Boosting instance prototypes to detect local dermoscopic features.

    Science.gov (United States)

    Situ, Ning; Yuan, Xiaojing; Zouridakis, George

    2010-01-01

    Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.

  12. Massachusetts Bay - Internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This feature class contains internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface for Massachusetts Bay. The...

  13. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    Science.gov (United States)

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows estimation of the quality of the mass candidate detector's output for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature
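    One plausible reading of the first method, feature maxima propagation, is a local maximum filter that spreads each feature's peak response over its neighbourhood so that peaks from different features line up before feature vectors are built. The sketch below implements that reading with SciPy on synthetic feature maps; it is an illustration, not the authors' algorithm.

```python
# Sketch: "propagate" local feature maxima to neighbouring pixels with a maximum filter,
# so that peaked responses from misaligned features agree better at the same locations.
# SciPy only; a schematic reading of the idea, not the authors' implementation.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(0)
feature_a = rng.random((64, 64))
feature_b = np.roll(feature_a, shift=(2, -1), axis=(0, 1))   # same peaks, slightly misaligned

def propagate_maxima(feature_map, radius=3):
    return maximum_filter(feature_map, size=2 * radius + 1)

fa, fb = propagate_maxima(feature_a), propagate_maxima(feature_b)
# After propagation, per-location values of the two feature maps agree much better near peaks.
print(np.abs(feature_a - feature_b).max(), np.abs(fa - fb).max())
```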

  14. Prostate cancer detection: Fusion of cytological and textural features

    Directory of Open Access Journals (Sweden)

    Kien Nguyen

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  15. Prostate cancer detection: Fusion of cytological and textural features.

    Science.gov (United States)

    Nguyen, Kien; Jain, Anil K; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  16. Object detection based on improved color and scale invariant features

    Science.gov (United States)

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

    A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system mainly adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive, but also poorly suited to color images. To overcome these drawbacks, we employ local color kernel histograms and Haar Wavelet Responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computation costs.

  17. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes, which are not necessarily perceptible by visual inspection, but they could be detected by using texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using the Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measurement of features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed using the t-test method. P-values are computed for the pairs of features in order to measure their efficacy.

  18. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
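    A compressed illustration of the final stages (removal of compact sources, then Hough-based line finding) is sketched below with scikit-image on a synthetic frame; the crude star-removal step and all thresholds are placeholders rather than the algorithm described in the paper.

```python
# Sketch: crude point-source removal followed by Hough-based line detection.
# scikit-image on a synthetic "star field with a trail"; illustrative only.
import numpy as np
from skimage.draw import line, disk
from skimage.morphology import opening, disk as disk_fp
from skimage.transform import probabilistic_hough_line

rng = np.random.default_rng(0)
img = np.zeros((256, 256), dtype=float)
for _ in range(40):                                 # fake stars
    rr, cc = disk((rng.integers(5, 251), rng.integers(5, 251)), 2)
    img[rr, cc] = 1.0
rr, cc = line(30, 10, 200, 240)                     # fake linear trail
img[rr, cc] = 1.0
img += 0.05 * rng.normal(size=img.shape)

# Stand-in for object removal: an opening keeps compact round sources,
# so subtracting them leaves mostly the thin linear trail.
mask = img > 0.5
stars = opening(mask, disk_fp(2))
line_only = mask & ~stars

segments = probabilistic_hough_line(line_only, threshold=10, line_length=40, line_gap=5)
print(len(segments), segments[:1])
```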

  19. Computed Tomography Features of Incidentally Detected Diffuse Thyroid Disease

    Directory of Open Access Journals (Sweden)

    Myung Ho Rho

    2014-01-01

    Objective. This study aimed to evaluate the CT features of incidentally detected DTD in patients who underwent thyroidectomy and to assess the diagnostic accuracy of CT diagnosis. Methods. We enrolled 209 consecutive patients who received preoperative neck CT and subsequent thyroid surgery. Neck CT in each case was retrospectively investigated by a single radiologist. We evaluated the diagnostic accuracy of individual CT features and the cut-off CT criteria for detecting DTD by comparing the CT features with histopathological results. Results. Histopathological examination of the 209 cases revealed normal thyroid (n=157), Hashimoto thyroiditis (n=17), non-Hashimoto lymphocytic thyroiditis (n=34), and diffuse hyperplasia (n=1). The CT features suggestive of DTD included low attenuation, inhomogeneous attenuation, increased glandular size, lobulated margin, and inhomogeneous enhancement. ROC curve analysis revealed that CT diagnosis of DTD based on the CT classification of "3 or more" abnormal CT features was superior. When the "3 or more" CT classification was selected, the sensitivity, specificity, positive and negative predictive values, and accuracy of CT diagnosis for DTD were 55.8%, 95.5%, 80.6%, 86.7%, and 85.6%, respectively. Conclusion. Neck CT may be helpful for the detection of incidental DTD.

  20. Logic based feature detection on incore neutron spectra

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Kiss, S.; Bende-Farkas, S. (Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics)

    1993-04-01

    A general framework for detecting features of incore neutron spectra with a rule-based methodology is presented. As an example, we determine the meaningful peaks in the APSDs. This work is part of a larger project, aimed at developing a noise diagnostic expert system. (Author).

  1. 2011 NOAA Bathymetric Lidar: U.S. Virgin Islands - St. Thomas, St. John, St. Croix (Salt River Bay, Buck Island)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data represents a LiDAR (Light Detection & Ranging) gridded bathymetric surface and a gridded relative seafloor reflectivity surface (incorporated into the...

  2. Gamelan Music Onset Detection based on Spectral Features

    Directory of Open Access Journals (Sweden)

    Yoyon Kusnendar Suprapto

    2013-03-01

    This research detects onsets of percussive instruments by examining the performance on the sound signals of gamelan instruments, one of the traditional music instruments of Indonesia. Onset plays an important role in determining musical rhythmic structure, like beat and tempo, and is highly required in many applications of music information retrieval. Four onset detection methods that employ spectral features, such as magnitude, phase, and the combination of both, are compared: phase slope (PS), weighted phase deviation (WPD), spectral flux (SF), and rectified complex domain (RCD). These features are extracted by representing the sound signals in the time-frequency domain using the overlapped Short-time Fourier Transform (STFT) and varying the window length. Onset detection functions are processed through peak-picking using a dynamic threshold. The results showed that by using a suitable window length and parameter setting of the dynamic threshold, an F-measure greater than 0.80 can be obtained for certain methods.
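    A bare-bones version of the spectral-flux branch of such a detector (STFT magnitude, half-wave-rectified frame-to-frame difference, peak picking against a moving-median threshold) is sketched below with SciPy on a synthetic percussive signal; parameters are arbitrary and the function names are invented.

```python
# Sketch: spectral-flux onset detection with a moving-median dynamic threshold.
# SciPy + NumPy on a synthetic percussive signal; illustrative only.
import numpy as np
from scipy.signal import stft, find_peaks, medfilt

FS = 22050

def spectral_flux(x, fs=FS, nperseg=1024, hop=512):
    _, _, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    mag = np.abs(Z)
    diff = np.diff(mag, axis=1)
    return np.maximum(diff, 0.0).sum(axis=0)        # half-wave rectified flux per frame

def pick_onsets(flux, delta=0.1, win=31):
    thresh = medfilt(flux, kernel_size=win) + delta * flux.max()
    peaks, _ = find_peaks(flux, height=thresh)
    return peaks

# Synthetic percussive signal: decaying tones struck at known times.
rng = np.random.default_rng(0)
t = np.arange(int(2.0 * FS)) / FS
x = 0.001 * rng.normal(size=t.size)
for onset in (0.25, 0.80, 1.40):
    idx = int(onset * FS)
    x[idx:] += np.exp(-8 * (t[idx:] - onset)) * np.sin(2 * np.pi * 440 * (t[idx:] - onset))

flux = spectral_flux(x)
print(pick_onsets(flux) * 512 / FS)                 # estimated onset times in seconds
```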

  3. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space and finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  4. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, as well as an algorithm capable of differentiating between face/non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a lot of research is dedicated to coming up with a real time solution. The algorithm should remain simple enough to perform in real time, yet it should not compromise on the challenges encountered during the detection and localization phase; it should remain invariant to scale, translation, and (±45 degree) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and color cues to classify skin color. A morphological operation with a union-structure component labeling algorithm extracts contiguous regions. Scale normalization is applied by the nearest neighbor interpolation method to avoid the effect of different scales. Using the aspect ratio of width to height, a Region of Interest (ROI) is obtained and then passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask for facial feature localization. The empirical results show an accuracy of 90% for five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)

  5. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially for imagery that contains only visible and near infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments cloud by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection, and was tested on more than ten types of optical images with different resolutions. Experiment results show that MSCN has obvious advantages over the traditional multi-feature combined cloud detection method in accuracy, especially in snow and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolution network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  6. Optical detection of random features for high security applications

    Science.gov (United States)

    Haist, T.; Tiziani, H. J.

    1998-02-01

    Optical detection of random features in combination with digital signatures based on public key codes in order to recognize counterfeit objects will be discussed. Without applying expensive production techniques objects are protected against counterfeiting. Verification is done off-line by optical means without a central authority. The method is applied for protecting banknotes. Experimental results for this application are presented. The method is also applicable for identity verification of a credit- or chip-card holder.

  7. Asymmetry features for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz

    2016-09-01

    The computer system for automatic interpretation of thermographic pictures created by the Braster devices uses image processing and machine learning algorithms. The huge set of attributes analyzed by this software includes the asymmetry measurements between corresponding images, and these features are analyzed in the presented paper. The system was tested on real data and achieves accuracy comparable to other popular techniques used for breast tumour detection.

  8. Quantification of storm-induced bathymetric change in a back-barrier estuary

    Science.gov (United States)

    Ganju, Neil K.; Suttles, Steven E.; Beudin, Alexis; Nowacki, Daniel J.; Miselis, Jennifer L.; Andrews, Brian D.

    2017-01-01

    Geomorphology is a fundamental control on ecological and economic function of estuaries. However, relative to open coasts, there has been little quantification of storm-induced bathymetric change in back-barrier estuaries. Vessel-based and airborne bathymetric mapping can cover large areas quickly, but change detection is difficult because measurement errors can be larger than the actual changes over the storm timescale. We quantified storm-induced bathymetric changes at several locations in Chincoteague Bay, Maryland/Virginia, over the August 2014 to July 2015 period using fixed, downward-looking altimeters and numerical modeling. At sand-dominated shoal sites, measurements showed storm-induced changes on the order of 5 cm, with variability related to stress magnitude and wind direction. Numerical modeling indicates that the predominantly northeasterly wind direction in the fall and winter promotes southwest-directed sediment transport, causing erosion of the northern face of sandy shoals; southwesterly winds in the spring and summer lead to the opposite trend. Our results suggest that storm-induced estuarine bathymetric change magnitudes are often smaller than those detectable with methods such as LiDAR. More precise fixed-sensor methods have the ability to elucidate the geomorphic processes responsible for modulating estuarine bathymetry on the event and seasonal timescale, but are limited spatially. Numerical modeling enables interpretation of broad-scale geomorphic processes and can be used to infer the long-term trajectory of estuarine bathymetric change due to episodic events, when informed by fixed-sensor methods.

  9. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  10. Multispectral image feature fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Fields, D.J.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-11-15

    Our system fuses information contained in registered images from multiple sensors to reduce the effect of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six visible wavelength bands, dual-band infrared (5 micron and 10 micron) and ground penetrating radar. Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separate in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, holes made by animals and natural processes, etc.) and some artifacts.

  11. Non-contact feature detection using ultrasonic Lamb waves

    Science.gov (United States)

    Sinha, Dipen N [Los Alamos, NM

    2011-06-28

    Apparatus and method for non-contact ultrasonic detection of features on or within the walls of hollow pipes are described. An air-coupled, high-power ultrasonic transducer for generating guided waves in the pipe wall, and a high-sensitivity, air-coupled transducer for detecting these waves, are disposed at a distance apart and at chosen angle with respect to the surface of the pipe, either inside of or outside of the pipe. Measurements may be made in reflection or transmission modes depending on the relative position of the transducers and the pipe. Data are taken by sweeping the frequency of the incident ultrasonic waves, using a tracking narrow-band filter to reduce detected noise, and transforming the frequency domain data into the time domain using fast Fourier transformation, if required.

  12. Breast Cancer Detection with Gabor Features from Digital Mammograms

    Directory of Open Access Journals (Sweden)

    Yufeng Zheng

    2010-01-01

    Full Text Available A new breast cancer detection algorithm, named the “Gabor Cancer Detection” (GCD) algorithm, utilizing Gabor features is proposed. Three major steps are involved in the GCD algorithm: preprocessing, segmentation (generating alarm segments), and classification (reducing false alarms). In preprocessing, a digital mammogram is down-sampled, quantized, denoised and enhanced. Nonlinear diffusion is used for noise suppression. In segmentation, a band-pass filter is formed by rotating a 1-D Gaussian filter (off center) in frequency space, termed a “Circular Gaussian Filter” (CGF). A CGF can be uniquely characterized by specifying a central frequency and a frequency band. A mass or calcification is a space-occupying lesion and usually appears as a bright region on a mammogram. The alarm segments (suspicious of being masses/calcifications) can be extracted using a threshold that is adaptively decided upon the histogram analysis of the CGF-filtered mammogram. In classification, a Gabor filter bank is formed with five bands by four orientations (horizontal, vertical, 45 and 135 degrees) in the Fourier frequency domain. For each mammographic image, twenty Gabor-filtered images are produced. A set of edge histogram descriptors (EHD) are then extracted from the 20 Gabor images for classification. An EHD signature is computed with four orientations of Gabor images along each band and the five EHD signatures are then joined together to form an EHD feature vector of 20 dimensions. With the EHD features, the fuzzy C-means clustering technique and a k-nearest neighbor (KNN) classifier are used to reduce the number of false alarms. The experimental results tested on the DDSM database (University of South Florida) show the promise of the GCD algorithm in breast cancer detection, which achieved a TP (true positive) rate of 90% at FPI (false positives per image) = 1.21 in mass detection; and TP = 93% at FPI = 1.19 in calcification detection.
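    The 5-band-by-4-orientation Gabor bank lends itself to a compact sketch. The code below builds real-valued Gabor kernels and summarizes each filtered image by its mean response magnitude, a simplified stand-in for the paper's edge histogram descriptors; the wavelengths, kernel size and summary statistic are assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel(wavelength, theta, sigma=None, gamma=0.5, size=31):
            """Real-valued Gabor kernel at one wavelength (band) and orientation."""
            sigma = sigma if sigma is not None else 0.56 * wavelength
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * xr / wavelength)

        def gabor_bank_descriptor(image):
            """20-D descriptor: mean response magnitude for 5 bands x 4 orientations."""
            bands = [4, 8, 16, 32, 64]                        # wavelengths in pixels (assumed)
            orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
            responses = []
            for lam in bands:
                for theta in orientations:
                    filtered = fftconvolve(image, gabor_kernel(lam, theta), mode="same")
                    responses.append(np.abs(filtered).mean())
            return np.array(responses)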

  13. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot depict properly a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may be thus prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and increase in this way the amount of details contained in the image. Experimental results of this study prove this assumption as they examine state of the art feature detectors applied both on standard dynamic range and HDR images.
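    One rough way to reproduce this comparison, assuming OpenCV and a bracketed exposure set of the same scene, is to count keypoints on a single exposure versus an exposure-fused image; the study uses true HDR radiance maps and several state-of-the-art detectors, so this ORB-based snippet is only an approximation, and the file names are placeholders.

        import cv2
        import numpy as np

        def orb_keypoint_count(gray_uint8, n_features=5000):
            orb = cv2.ORB_create(nfeatures=n_features)
            return len(orb.detect(gray_uint8, None))

        # Assumed bracketed shots of the same scene (under-, mid-, over-exposed).
        exposure_paths = ["under.jpg", "mid.jpg", "over.jpg"]
        exposures = [cv2.imread(p) for p in exposure_paths]

        # Exposure fusion (Mertens) as a stand-in for a tone-mapped HDR image.
        fused = cv2.createMergeMertens().process(exposures)
        fused_u8 = np.clip(fused * 255, 0, 255).astype(np.uint8)

        ldr_count = orb_keypoint_count(cv2.cvtColor(exposures[1], cv2.COLOR_BGR2GRAY))
        hdr_count = orb_keypoint_count(cv2.cvtColor(fused_u8, cv2.COLOR_BGR2GRAY))
        print(f"keypoints: single exposure {ldr_count}, fused {hdr_count}")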

  14. System for Detecting Vehicle Features from Low Quality Data

    Directory of Open Access Journals (Sweden)

    Marcin Dominik Bugdol

    2018-02-01

    Full Text Available The paper presents a system that recognizes the make, colour and type of the vehicle. The classification has been performed using low quality data from real-traffic measurement devices. For detecting vehicles’ specific features three methods have been developed. They employ several image and signal recognition techniques, e.g. Mamdani Fuzzy Inference System for colour recognition or Scale Invariant Features Transform for make identification. The obtained results are very promising, especially because only on-site equipment, not dedicated for such application, has been employed. In case of car type, the proposed system has better performance than commonly used inductive loops. Extensive information about the vehicle can be used in many fields of Intelligent Transport Systems, especially for traffic supervision.

  15. Logic based feature detection on incore neutron spectra

    International Nuclear Information System (INIS)

    Bende-Farkas, S.; Kiss, S.; Racz, A.

    1992-09-01

    A methodology is proposed to investigate neutron spectra in such a way which is similar to human thinking. The goal was to save experts from tedious, mechanical tasks of browsing a large amount of signals in order to recognize changes in the underlying mechanisms. The general framework for detecting features of incore neutron spectra with a rulebased methodology is presented. As an example, the meaningful peaks in the APSDs are determined. This method is a part of a wider project to develop a noise diagnostic expert system. (R.P.) 6 refs.; 6 figs.; 1 tab
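    The rule base itself is not given in the abstract, but the flavour of rule-based peak screening on an auto power spectral density (APSD) can be sketched with scipy; the background estimate and both thresholds below are assumed placeholders, not the authors' rules.

        import numpy as np
        from scipy.signal import find_peaks

        def meaningful_peaks(freqs, apsd, prominence_ratio=3.0, min_width_bins=2):
            """Return frequencies of APSD peaks that pass simple, human-readable rules."""
            background = np.median(apsd)               # crude estimate of the noise floor
            peaks, props = find_peaks(apsd, prominence=prominence_ratio * background,
                                      width=min_width_bins)
            accepted = []
            for idx, width in zip(peaks, props["widths"]):
                # Rule 1: the peak must stand clearly above the estimated background.
                # Rule 2: it must span more than one frequency bin (reject spikes).
                if apsd[idx] > 2.0 * background and width >= min_width_bins:
                    accepted.append(freqs[idx])
            return accepted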

  16. Inverted dipole feature in directional detection of exothermic dark matter

    International Nuclear Information System (INIS)

    Bozorgnia, Nassim; Gelmini, Graciela B.; Gondolo, Paolo

    2017-01-01

    Directional dark matter detection attempts to measure the direction of motion of nuclei recoiling after having interacted with dark matter particles in the halo of our Galaxy. Due to Earth's motion with respect to the Galaxy, the dark matter flux is concentrated around a preferential direction. An anisotropy in the recoil direction rate is expected as an unmistakable signature of dark matter. The average nuclear recoil direction is expected to coincide with the average direction of dark matter particles arriving to Earth. Here we point out that for a particular type of dark matter, inelastic exothermic dark matter, the mean recoil direction as well as a secondary feature, a ring of maximum recoil rate around the mean recoil direction, could instead be opposite to the average dark matter arrival direction. Thus, the detection of an average nuclear recoil direction opposite to the usually expected direction would constitute a spectacular experimental confirmation of this type of dark matter.

  17. Camouflaged target detection based on polarized spectral features

    Science.gov (United States)

    Tan, Jian; Zhang, Junping; Zou, Bin

    2016-05-01

    Polarized hyperspectral images (PHSI) include polarization, spectral, spatial and radiant features, which provide more information about objects and scenes than traditional intensity or spectral images alone. Polarization can suppress the background and highlight the object, giving it high potential to improve camouflaged target detection, so polarized hyperspectral imaging has attracted extensive attention in the last few years. Detection methods for this modality are still not very mature, and most are rooted in hyperspectral image detection. Before applying these algorithms, the Stokes vector is normally used to process the original four-dimensional polarized hyperspectral data; however, when the data are large and complex, the computation and the error increase. In this paper, a tensor formulation is applied to reconstruct the original four-dimensional data into new three-dimensional data; then constrained energy minimization (CEM) is used to process the new data, adding the polarization information to construct a polarized spectral filter operator that takes full advantage of the spectral and polarization information. This approach handles the original data without extracting the Stokes vector, greatly reducing computation and error. The experimental results also show that the proposed method is well suited to target detection in PHSI.
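    Constrained energy minimization has a simple closed form, sketched below for a generic pixels-by-bands matrix; the tensor reconstruction and the polarization-augmented filter operator described in the abstract are not reproduced here.

        import numpy as np

        def cem_detector(X, d, eps=1e-6):
            """Constrained energy minimization.

            X : (n_pixels, n_bands) data matrix
            d : (n_bands,) target signature
            Returns the detector output for every pixel; large values indicate the target.
            """
            R = (X.T @ X) / X.shape[0]                     # sample correlation matrix
            R_inv = np.linalg.inv(R + eps * np.eye(R.shape[0]))
            w = R_inv @ d / (d @ R_inv @ d)                # CEM filter weights
            return X @ w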

  18. Feature-fused SSD: fast detection for small objects

    Science.gov (United States)

    Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian

    2018-04-01

    Small object detection is a challenging task in computer vision because of the limited resolution and information available for small objects. To address this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot Multibox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method for introducing contextual information into SSD, in order to improve the accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way they add contextual information. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, with 2-3 points of improvement on some small-object categories. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
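    A compact PyTorch sketch of the two fusion styles named above, concatenation versus element-wise sum of a shallow and an up-sampled deep feature map; channel widths and the upsampling factor are placeholders rather than the exact SSD configuration.

        import torch
        import torch.nn as nn

        class ConcatFusion(nn.Module):
            """Fuse a shallow (high-resolution) and a deep (contextual) map by concatenation."""
            def __init__(self, c_shallow, c_deep, c_out):
                super().__init__()
                self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
                self.conv = nn.Conv2d(c_shallow + c_deep, c_out, kernel_size=1)

            def forward(self, shallow, deep):
                return torch.relu(self.conv(torch.cat([shallow, self.up(deep)], dim=1)))

        class SumFusion(nn.Module):
            """Fuse by projecting both maps to the same width and adding element-wise."""
            def __init__(self, c_shallow, c_deep, c_out):
                super().__init__()
                self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
                self.proj_s = nn.Conv2d(c_shallow, c_out, kernel_size=1)
                self.proj_d = nn.Conv2d(c_deep, c_out, kernel_size=1)

            def forward(self, shallow, deep):
                return torch.relu(self.proj_s(shallow) + self.proj_d(self.up(deep)))

    As a usage illustration, ConcatFusion(256, 512, 256) could fuse a 64x64 map with 256 channels and a 32x32 map with 512 channels into a single 64x64, 256-channel map.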

  19. Detection and analysis of diamond fingerprinting feature and its application

    Energy Technology Data Exchange (ETDEWEB)

    Li Xin; Huang Guoliang; Li Qiang; Chen Shengyi, E-mail: tshgl@tsinghua.edu.cn [Department of Biomedical Engineering, the School of Medicine, Tsinghua University, Beijing, 100084 (China)

    2011-01-01

    Before becoming a jewelry diamonds need to be carved artistically with some special geometric features as the structure of the polyhedron. There are subtle differences in the structure of this polyhedron in each diamond. With the spatial frequency spectrum analysis of diamond surface structure, we can obtain the diamond fingerprint information which represents the 'Diamond ID' and has good specificity. Based on the optical Fourier Transform spatial spectrum analysis, the fingerprinting identification of surface structure of diamond in spatial frequency domain was studied in this paper. We constructed both the completely coherent diamond fingerprinting detection system illuminated by laser and the partially coherent diamond fingerprinting detection system illuminated by led, and analyzed the effect of the coherence of light source to the diamond fingerprinting feature. We studied rotation invariance and translation invariance of the diamond fingerprinting and verified the feasibility of real-time and accurate identification of diamond fingerprint. With the profit of this work, we can provide customs, jewelers and consumers with a real-time and reliable diamonds identification instrument, which will curb diamond smuggling, theft and other crimes, and ensure the healthy development of the diamond industry.

  20. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has been receiving great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoot. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves a better robustness over the state-of-the-art methods.

  1. Impact of bathymetric system advances on hydrography

    Digital Repository Service at National Institute of Oceanography (India)

    Ranade, G.

    … undergone unprecedented changes with the advancement of motion sensor technology. By the late 1970s, gyro-stabilized, accelerometer-based attitude monitoring systems, computing roll, pitch and heave, had come into existence. Doppler sonar principle …

  2. A general method for generating bathymetric data for hydrodynamic computer models

    Science.gov (United States)

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric data base using linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
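    For readers who want the same result without the report's original programs, the Delaunay-based interpolation in scipy performs an equivalent linear (or cubic) interpolation of scattered soundings onto model grid points; the nearest-neighbour fallback for nodes outside the survey hull is an added assumption, not part of the report.

        import numpy as np
        from scipy.interpolate import griddata

        def interpolate_depths(xy_soundings, depths, grid_xy, method="linear"):
            """Interpolate scattered bathymetric data onto hydrodynamic-model grid points.

            xy_soundings : (n, 2) survey positions
            depths       : (n,) measured water depths
            grid_xy      : (m, 2) model node coordinates
            """
            z = griddata(xy_soundings, depths, grid_xy, method=method)
            # Nodes outside the convex hull of the survey get NaN; fall back to nearest.
            nan = np.isnan(z)
            if nan.any():
                z[nan] = griddata(xy_soundings, depths, grid_xy[nan], method="nearest")
            return z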

  3. Bathymetric survey and estimation of the water balance of Lake ...

    African Journals Online (AJOL)

    Quantification of the water balance components and bathymetric survey is very crucial for sustainable management of lake waters. This paper focuses on the bathymetry and the water balance of the crater Lake Ardibo, recently utilized for irrigation. The bathymetric map of the lake is established at a contour interval of 10 ...

  4. Bathymetric surveys of the Neosho River, Spring River, and Elk River, northeastern Oklahoma and southwestern Missouri, 2016–17

    Science.gov (United States)

    Hunter, Shelby L.; Ashworth, Chad E.; Smith, S. Jerrod

    2017-09-26

    In February 2017, the Grand River Dam Authority filed to relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission. The predominant feature of the Pensacola Hydroelectric Project is Pensacola Dam, which impounds Grand Lake O’ the Cherokees (locally called Grand Lake) in northeastern Oklahoma. Identification of information gaps and assessment of project effects on stakeholders are central aspects of the Federal Energy Regulatory Commission relicensing process. Some upstream stakeholders have expressed concerns about the dynamics of sedimentation and flood flows in the transition zone between major rivers and Grand Lake O’ the Cherokees. To relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission, the hydraulic models for these rivers require high-resolution bathymetric data along the river channels. In support of the Federal Energy Regulatory Commission relicensing process, the U.S. Geological Survey, in cooperation with the Grand River Dam Authority, performed bathymetric surveys of (1) the Neosho River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, (2) the Spring River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, and (3) the Elk River from Noel, Missouri, to the Oklahoma State Highway 10 bridge near Grove, Oklahoma. The Neosho River and Spring River bathymetric surveys were performed from October 26 to December 14, 2016; the Elk River bathymetric survey was performed from February 27 to March 21, 2017. Only areas inundated during those periods were surveyed. The bathymetric surveys covered a total distance of about 76 river miles and a total area of about 5 square miles. Greater than 1.4 million bathymetric-survey data points were used in the computation and interpolation of bathymetric-survey digital elevation models and derived contours at 1-foot (ft) intervals. The minimum bathymetric-survey elevation of the Neosho

  5. An object-oriented feature-based design system face-based detection of feature interactions

    International Nuclear Information System (INIS)

    Ariffin Abdul Razak

    1999-01-01

    This paper presents an object-oriented, feature-based design system which supports the integration of design and manufacture by ensuring that part descriptions fully account for any feature interactions. Manufacturing information is extracted from the feature descriptions in the form of volumes and Tool Access Directions, TADs. When features interact, both volumes and TADs are updated. This methodology has been demonstrated by developing a prototype system in which ACIS attributes are used to record feature information within the data structure of the solid model. The system is implemented in the C++ programming language and embedded in a menu-driven X-windows user interface to the ACIS 3D Toolkit. (author)

  6. Synoptic channel morphodynamics with topo-bathymetric airborne lidar: promises, pitfalls and research needs

    Science.gov (United States)

    Lague, D.; Launeau, P.; Gouraud, E.

    2017-12-01

    Topo-bathymetric airborne lidar sensors using a green laser penetrating water and suitable for hydrography are now sold by major manufacturers. In the context of channel morphodynamics, repeat surveys could offer synoptic high resolution measurement of topo-bathymetric change, a key data set that is currently missing. Yet, beyond the technological promise, what can we really achieve with these sensors in terms of depth penetration and bathymetric accuracy? Can all rivers be surveyed? How easy is it to process this new type of data to get the data needed by geomorphologists? Here we report on the use of the Optech Titan dual wavelength (1064 nm & 532 nm) sensor operated by the universities of Rennes and Nantes (France) and deployed over several rivers and lakes in France, including repeat surveys. We will illustrate cases where the topo-bathymetric survey is complete, reaching up to 6 m depth in rivers and offering unprecedented data for channel morphology analysis over tens of kilometres. We will also present challenging cases for which the technology will never work, or for which new algorithms to process the full waveform are required. We will illustrate new developments for automated processing of large datasets, including the critical step of water surface detection and refraction correction. In suitable rivers, airborne topo-bathymetric surveys offer unprecedented synoptic 3D data at very high resolution (> 15 pts/m² in bathymetry) and precision (better than 10 cm for the bathymetry) down to 5-6 meters depth, with a perfectly continuous topography-to-bathymetry transition. This presentation will illustrate how this new type of data, when combined with 2D hydraulic modelling, offers new insights into the spatial variations of friction in relation to channel bedforms, and the connectivity between rivers and floodplains.

  7. E/V Nautilus Detection of Isolated Features in the Eastern Pacific Ocean: Newly Discovered Calderas and Methane Seeps

    Science.gov (United States)

    Raineault, N.; Irish, O.; Lubetkin, M.

    2016-02-01

    The E/V Nautilus mapped over 80,000 km2 of the seafloor in the Gulf of Mexico and Eastern Pacific Ocean during its 2015 expedition. The Nautilus used its Kongsberg EM302 multibeam system to map the seafloor prior to remotely operated vehicle (ROV) dives, both for scientific purposes (site selection) and navigational safety. The Nautilus also routinely maps during transits to identify previously un-mapped or unresolved seafloor features. During its transit from the Galapagos Islands to the California Borderland, the Nautilus mapped 44,695 km2 of seafloor. Isolated features on the seafloor and in the water-column, such as calderas and methane seeps, were detected during this data collection effort. Operating at a frequency of 30 kHz in waters ranging from 1000-5500 m, we discovered caldera features off the coast of Central America. Since seamounts are known hotspots of biodiversity, locating new ones may enrich our understanding of seamounts as "stepping stones" for species distribution and ocean current pathways. Satellite altimetry datasets prior to this data either did not discern these calderas or recognized the presence of a bathymetric high without great detail. This new multibeam bathymetry data, gridded at 50 m, gives a precise look at these seamounts that range in elevation from 350 to 1400 m from abyssal depth. The largest of the calderas is circular in shape and is 10,000 m in length and 5,000 m in width, with a distinct circular depression at the center of its highest point, 1,400 m above the surrounding abyssal depth. In the California Borderland region, located between San Diego and Los Angeles, four new seeps were discovered in water depths from 400-1,020 m. ROV exploration of these seeps revealed vent communities. Altogether, these discoveries reinforce how little we know about the global ocean, indicate the presence of isolated deep-sea ecosystems that support biologically diverse communities, and will impact our understanding of seafloor habitat.

  8. Detection of Seed Methods for Quantification of Feature Confinement

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Bouwers, Eric; Jørgensen, Bo Nørregaard

    2012-01-01

    The way features are implemented in source code has a significant influence on multiple quality aspects of a software system. Hence, it is important to regularly evaluate the quality of feature confinement. Unfortunately, existing approaches to such measurement rely on expert judgement for tracin...

  9. Feature Detection, Characterization and Confirmation Methodology: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-03-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations

  10. Feature Detection, Characterization and Confirmation Methodology: Final Report

    International Nuclear Information System (INIS)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-01-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations. Many of the

  11. Improving EEG signal peak detection using feature weight learning ...

    Indian Academy of Sciences (India)

    Asrul Adam

    … The groups of Acir et al. … the difference between the peak and the floating mean, which is … Thus, the individual features were …

  12. Numerical Analysis for Relevant Features in Intrusion Detection (NARFid)

    Science.gov (United States)

    2009-03-01

    Error and Average Correlation Coefficient. Mucciardi and Gose [63] discuss seven methods for selecting features. These methods seek to overcome the … (POEmax − POEmin). With each iteration of selecting the next feature, ACC is also normalized in the same fashion. As stated by Mucciardi and Gose … discussion [70] as described in Section 2.3.1. Mucciardi and Gose [63] provide the POEACC parameters that perform well in their experiments. …

  13. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
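    The DoC score itself is not fully specified in the abstract, but the overall pattern of ensemble-derived feature importance, ranking, and evaluation of nested subsets can be sketched with scikit-learn; the two importance measures combined below are stand-ins for the ensemble used in the paper, and the hyperparameters are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def ensemble_rank_and_evaluate(X, y, max_features=20):
            """Rank features by averaged importance over an ensemble, then score nested subsets."""
            rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
            lr = LogisticRegression(max_iter=5000).fit(X, y)
            # Combine two importance measures (a stand-in for the DoC score).
            imp = rf.feature_importances_ / rf.feature_importances_.sum()
            imp = imp + np.abs(lr.coef_).ravel() / np.abs(lr.coef_).sum()
            order = np.argsort(imp)[::-1]
            scores = {}
            for k in range(1, min(max_features, X.shape[1]) + 1):
                subset = order[:k]
                scores[k] = cross_val_score(
                    RandomForestClassifier(n_estimators=200, random_state=0),
                    X[:, subset], y, cv=5).mean()
            return order, scores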

  14. The effect of destination linked feature selection in real-time network intrusion detection

    CSIR Research Space (South Africa)

    Mzila, P

    2013-07-01

    Full Text Available techniques in the network intrusion detection system (NIDS) is the feature selection technique. The ability of NIDS to accurately identify intrusion from the network traffic relies heavily on feature selection, which describes the pattern of the network...

  15. Background area effects on feature detectability in CT and uncorrelated noise

    International Nuclear Information System (INIS)

    Swensson, R.G.; Judy, P.F.

    1987-01-01

    Receiver operating characteristic curve measures of feature detectability decrease substantially when the surrounding area of uniform-noise background is small relative to that of the feature itself. The effect occurs with both fixed and variable-level backgrounds, but differs in form for CT and uncorrelated noise. Cross-correlation image calculations can only predict these effects by treating feature detection as the discrimination of a local change (a ''feature'') from the estimated level of an assumed-uniform region of background

  16. Usage of polarisation features of landmines for improved automatic detection

    NARCIS (Netherlands)

    Jong, W. de; Cremer, F.; Schutte, K.; Storm, J.

    2000-01-01

    In this paper the landmine detection performance of an infrared and a visual light camera both equipped with a polarisation filter are compared with the detection performance of these cameras without polarisation filters. Sequences of images have been recorded with in front of these cameras a

  17. Retinal microaneurysms detection using local convergence index features

    NARCIS (Netherlands)

    Dashtbozorg, B.; Zhang, J.; Huang, F.; ter Haar Romeny, B.M.

    2018-01-01

    Retinal microaneurysms (MAs) are the earliest clinical sign of diabetic retinopathy disease. Detection of microaneurysms is crucial for the early diagnosis of diabetic retinopathy and prevention of blindness. In this paper, a novel and reliable method for automatic detection of microaneurysms in

  18. Retinal microaneurysms detection using local convergence index features

    NARCIS (Netherlands)

    Dasht Bozorg, B.; Zhang, J.; ter Haar Romeny, B.M.

    2017-01-01

    Retinal microaneurysms are the earliest clinical sign of diabetic retinopathy disease. Detection of microaneurysms is crucial for the early diagnosis of diabetic retinopathy and prevention of blindness. In this paper, a novel and reliable method for automatic detection of microaneurysms in retinal

  19. Processing and evaluation of riverine waveforms acquired by an experimental bathymetric LiDAR

    Science.gov (United States)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2010-12-01

    Accurate mapping of fluvial environments with airborne bathymetric LiDAR is challenged not only by environmental characteristics but also the development and application of software routines to post-process the recorded laser waveforms. During a bathymetric LiDAR survey, the transmission of the green-wavelength laser pulses through the water column is influenced by a number of factors including turbidity, the presence of organic material, and the reflectivity of the streambed. For backscattered laser pulses returned from the river bottom and digitized by the LiDAR detector, post-processing software is needed to interpret and identify distinct inflections in the reflected waveform. Relevant features of this energy signal include the air-water interface, volume reflection from the water column itself, and, ideally, a strong return from the bottom. We discuss our efforts to acquire, analyze, and interpret riverine surveys using the USGS Experimental Advanced Airborne Research LiDAR (EAARL) in a variety of fluvial environments. Initial processing of data collected in the Trinity River, California, using the EAARL Airborne Lidar Processing Software (ALPS) highlighted the difficulty of retrieving a distinct bottom signal in deep pools. Examination of laser waveforms from these pools indicated that weak bottom reflections were often neglected by a trailing edge algorithm used by ALPS to process shallow riverine waveforms. For the Trinity waveforms, this algorithm had a tendency to identify earlier inflections as the bottom, resulting in a shallow bias. Similarly, an EAARL survey along the upper Colorado River, Colorado, also revealed the inadequacy of the trailing edge algorithm for detecting weak bottom reflections. We developed an alternative waveform processing routine by exporting digitized laser waveforms from ALPS, computing the local extrema, and fitting Gaussian curves to the convolved backscatter. Our field data indicate that these techniques improved the
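    The alternative processing route the authors describe (locating local extrema and fitting Gaussians to the digitized backscatter) can be illustrated as follows; the prominence threshold, fit window and the convention of taking the latest strong return as the bottom are assumptions made for this sketch.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.signal import find_peaks

        def gaussian(t, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

        def fit_waveform_returns(t, waveform, min_prominence=5.0):
            """Fit a Gaussian to each local maximum of a digitized backscatter waveform."""
            peaks, _ = find_peaks(waveform, prominence=min_prominence)
            fits = []
            for p in peaks:
                lo, hi = max(p - 5, 0), min(p + 6, len(t))
                try:
                    popt, _ = curve_fit(gaussian, t[lo:hi], waveform[lo:hi],
                                        p0=[waveform[p], t[p], 1.0], maxfev=2000)
                    fits.append(popt)        # (amplitude, centre time, width)
                except RuntimeError:
                    continue
            # Sorted by centre time: early returns are the air-water interface and
            # water-column backscatter; the latest sufficiently strong return is
            # usually interpreted as the bottom.
            return sorted(fits, key=lambda f: f[1])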

  20. Improving EEG signal peak detection using feature weight learning ...

    Indian Academy of Sciences (India)

    Therefore, we aimed to develop a general procedure for eye event-related applications based on feature weight learning (FWL), through the use of a neural network with random weights (NNRW) as the classifier. The FWL is performed using a particle swarm optimization algorithm, applied to the well-studied Dumpala, Acir, ...

  1. Microarray-based large scale detection of single feature ...

    Indian Academy of Sciences (India)

    2015-12-08

    … mental stages was used to identify single feature polymorphisms (SFPs) … on a high-density oligonucleotide expression array … The sign (+/−) with SFPs indicates the direction of polymorphism …

  2. Automatic Detection of Sand Ripple Features in Sidescan Sonar Imagery

    Science.gov (United States)

    2014-07-09

    Among the features used in forensic scientific fingerprint analysis are terminations or bifurcations of print ridges. Sidescan sonar imagery of ripple...always be pathological cases. The size of the blocks of pixels used in determining the ripple wavelength is evident in the output images on the right in

  3. Detection of hypertensive retinopathy using vessel measurements and textural features.

    Science.gov (United States)

    Agurto, Carla; Joshi, Vinayak; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

    Features that indicate hypertensive retinopathy have been well described in the medical literature. This paper presents a new system to automatically classify subjects with hypertensive retinopathy (HR) using digital color fundus images. Our method consists of the following steps: 1) normalization and enhancement of the image; 2) determination of regions of interest based on automatic location of the optic disc; 3) segmentation of the retinal vasculature and measurement of vessel width and tortuosity; 4) extraction of color features; 5) classification of vessel segments as arteries or veins; 6) calculation of artery-vein ratios using the six widest (major) vessels for each category; 7) calculation of mean red intensity and saturation values for all arteries; 8) calculation of amplitude-modulation frequency-modulation (AM-FM) features for entire image; and 9) classification of features into HR and non-HR using linear regression. This approach was tested on 74 digital color fundus photographs taken with TOPCON and CANON retinal cameras using leave-one out cross validation. An area under the ROC curve (AUC) of 0.84 was achieved with sensitivity and specificity of 90% and 67%, respectively.

  4. Problems of Software Detection of Periodic Features in a Time ...

    African Journals Online (AJOL)

    Problems arise when attempts are made to extract automatically, visually obvious periodic features indicative of defects in a vibration time series for diagnosis using computers. Such problems may be interpretational in nature arising either from insufficient knowledge of the mechanism, or the convolution of the source signal ...

  5. Detection of Abnormal Events via Optical Flow Feature Analysis

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2015-03-01

    Full Text Available In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of the optical flow orientation descriptor and the classification method. The details of the histogram of the optical flow orientation descriptor are illustrated for describing movement information of the global video frame or foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The differences in the abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm.

  6. Detection of Abnormal Events via Optical Flow Feature Analysis

    Science.gov (United States)

    Wang, Tian; Snoussi, Hichem

    2015-01-01

    In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of the optical flow orientation descriptor and the classification method. The details of the histogram of the optical flow orientation descriptor are illustrated for describing movement information of the global video frame or foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The differences in the abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm. PMID:25811227

  7. Empirical Evaluation of Different Feature Representations for Social Circles Detection

    Science.gov (United States)

    2015-06-16

    … study and compare the performance on the available labelled Facebook data from the Kaggle competition on learning social circles in networks [5]. The data consist of hand-labelled friendship egonets from Facebook and a set of 57 … Social circles detection is a special case of community detection in social networks that is currently attracting a …

  8. Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face Detection

    Directory of Open Access Journals (Sweden)

    Xiaojun Lu

    2017-01-01

    Full Text Available This paper proposes a method that uses feature fusion to represent images better for face detection after feature extraction by deep convolutional neural network (DCNN). First, with Clarifai net and VGG Net-D (16 layers), we learn features from data, respectively; then we fuse features extracted from the two nets. To obtain more compact feature representation and mitigate computation complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification by SVM classifier for binary classification. In particular, we exploit offset max-pooling to extract features with sliding window densely, which leads to better matches of faces and detection windows; thus the detection result is more accurate. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves 89.24% recall rate on FDDB and 97.19% average precision on AFW.
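    Leaving the two CNN backbones aside, the fusion, PCA and SVM stages described above reduce to a few lines of scikit-learn; the feature dimensions and the PCA size below are assumptions, not the paper's settings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def train_fused_face_classifier(feats_a, feats_b, labels, n_components=256):
            """Fuse two CNN feature sets, compress with PCA, and train a binary SVM.

            feats_a, feats_b : (n_windows, d1) and (n_windows, d2) features produced by
                               two different networks for the same candidate windows
            labels           : 1 = face, 0 = non-face
            """
            fused = np.hstack([feats_a, feats_b])                # feature-level fusion
            clf = make_pipeline(PCA(n_components=n_components),  # compact representation
                                SVC(kernel="rbf", C=1.0))        # face / non-face decision
            clf.fit(fused, labels)
            return clf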

  9. Tampa Bay Topographic/Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In this joint demonstration project for the Tampa Bay region, NOAA's National Ocean Service (NOS) and the U.S. Geological Survey (USGS) have merged NOAA bathymetric...

  10. The ship edge feature detection based on high and low threshold for remote sensing image

    Science.gov (United States)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human vision system and the target features is analyzed, and the ship target is determined by detecting its edge features. First, a second-order differential method is used to enhance image quality. Second, to improve the edge operator, high and low thresholds are introduced to contrast edge and non-edge points; the edge points are treated as the foreground image and the non-edge points as the background, and image segmentation is applied to achieve edge detection and remove false edges. Finally, the edge features are described based on the edge detection result and used to determine the ship target. The experimental results show that the proposed method effectively reduces the number of false edges and achieves high accuracy in remote sensing ship edge detection.
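    The exact operator is not spelled out in the abstract, but the double-threshold idea resembles hysteresis thresholding of a gradient magnitude image, which can be sketched with scikit-image; the quantile-based choice of the high and low thresholds is an assumption.

        import numpy as np
        from skimage import filters, io

        def ship_edge_mask(path, low_q=0.80, high_q=0.95):
            """Edge map from a remote sensing image using high/low (hysteresis) thresholds."""
            gray = io.imread(path, as_gray=True)
            grad = filters.sobel(gray)                         # gradient magnitude
            low, high = np.quantile(grad, [low_q, high_q])     # data-driven thresholds (assumed)
            # Pixels above `high` are edges; pixels above `low` are kept only when
            # connected to a strong edge, which suppresses isolated noise responses.
            return filters.apply_hysteresis_threshold(grad, low, high)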

  11. The Effect of Resolution on Detecting Visually Salient Preattentive Features

    Science.gov (United States)

    2015-06-01

    … resolutions in descending order (a–e). The plot compiles the areas of interest displayed in the images and each symbol represents one of the images. … to particular regions in a scene by highly salient features, for example, the color of the flower discussed in the previous example. …

  12. Chromatic Information and Feature Detection in Fast Visual Analysis.

    Directory of Open Access Journals (Sweden)

    Maria M Del Viva

    Full Text Available The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  13. Information Processing Features Can Detect Behavioral Regimes of Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Rick Quax

    2018-01-01

    Full Text Available In dynamical systems, local interactions between dynamical units generate correlations which are stored and transmitted throughout the system, generating the macroscopic behavior. However a framework to quantify exactly how these correlations are stored, transmitted, and combined at the microscopic scale is missing. Here we propose to characterize the notion of “information processing” based on all possible Shannon mutual information quantities between a future state and all possible sets of initial states. We apply it to the 256 elementary cellular automata (ECA), which are the simplest possible dynamical systems exhibiting behaviors ranging from simple to complex. Our main finding is that only a few information features are needed for full predictability of the systemic behavior and that the “information synergy” feature is always most predictive. Finally we apply the idea to foreign exchange (FX) and interest-rate swap (IRS) time-series data. We find an effective “slowing down” leading indicator in all three markets for the 2008 financial crisis when applied to the information features, as opposed to using the data itself directly. Our work suggests that the proposed characterization of the local information processing of units may be a promising direction for predicting emergent systemic behaviors.

  14. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images

    Science.gov (United States)

    Gong, Maoguo; Yang, Hailun; Zhang, Puzhao

    2017-07-01

    Ternary change detection aims to detect changes and group the changes into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. First, a sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. Then the learned features are clustered into three classes, which are taken as the pseudo labels for training a CNN model as change feature classifier. The reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Having training samples and the corresponding pseudo labels, the CNN model can be trained by using back propagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike the traditional methods, the proposed framework integrates the merits of sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.

  15. Improving features used for hyper-temporal land cover change detection by reducing the uncertainty in the feature extraction method

    CSIR Research Space (South Africa)

    Salmon, BP

    2017-07-01

    Full Text Available … the effect which the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short Fourier transform as a feature extraction method provides meaningful, robust input to a machine learning method. In theory...

  16. DroidEnsemble: Detecting Android Malicious Applications with Ensemble of String and Structural Static Features

    KAUST Repository

    Wang, Wei

    2018-05-11

    Android platform has dominated the Operating System of mobile devices. However, the dramatic increase of Android malicious applications (malapps) has caused serious software failures to Android system and posed a great threat to users. The effective detection of Android malapps has thus become an emerging yet crucial issue. Characterizing the behaviors of Android applications (apps) is essential to detecting malapps. Most existing work on detecting Android malapps was mainly based on string static features such as permissions and API usage extracted from apps. There also exists work on the detection of Android malapps with structural features, such as Control Flow Graph (CFG) and Data Flow Graph (DFG). As Android malapps have become increasingly polymorphic and sophisticated, using only one type of static features may result in false negatives. In this work, we propose DroidEnsemble that takes advantage of both string features and structural features to systematically and comprehensively characterize the static behaviors of Android apps and thus build a more accurate detection model for the detection of Android malapps. We extract each app’s string features, including permissions, hardware features, filter intents, restricted API calls, used permissions, code patterns, as well as structural features like function call graph. We then use three machine learning algorithms, namely, Support Vector Machine (SVM), k-Nearest Neighbor (kNN) and Random Forest (RF), to evaluate the performance of these two types of features and of their ensemble. In the experiments, we evaluate our methods and models with 1386 benign apps and 1296 malapps. Extensive experimental results demonstrate the effectiveness of DroidEnsemble. It achieves a detection accuracy of 95.8% with only string features and of 90.68% with only structural features. DroidEnsemble reaches a detection accuracy of 98.4% with the ensemble of both types of features, reducing 9 false positives and 12 false
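    The evaluation set-up, three classifiers run on string features, structural features and their ensemble, maps naturally onto scikit-learn; the snippet below assumes the feature matrices have already been extracted from the apps and uses illustrative hyperparameters.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def evaluate_feature_sets(X_string, X_structural, y):
            """Compare string-only, structural-only and combined feature sets."""
            feature_sets = {"string": X_string,
                            "structural": X_structural,
                            "ensemble": np.hstack([X_string, X_structural])}
            classifiers = {"SVM": SVC(kernel="rbf"),
                           "kNN": KNeighborsClassifier(n_neighbors=5),
                           "RF": RandomForestClassifier(n_estimators=300, random_state=0)}
            results = {}
            for fname, X in feature_sets.items():
                for cname, clf in classifiers.items():
                    results[(fname, cname)] = cross_val_score(clf, X, y, cv=5).mean()
            return results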

  17. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature

  18. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    Science.gov (United States)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Because traditional change detection algorithms depend mainly on the spectral information of image objects and fail to effectively mine and fuse multiple image features, this article borrows ideas from object-oriented analysis and proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are generated by multi-scale segmentation; then a color histogram and a linear gradient (edge line) histogram are computed for each object. The earth mover's distance (EMD) between the histograms of corresponding objects in different periods gives a color feature distance and an edge line feature distance, which are combined with adaptive weighting to construct the object heterogeneity. Finally, the change detection results are analyzed using the curvature histogram. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.

  19. Speed Bump Detection Using Accelerometric Features: A Genetic Algorithm Approach.

    Science.gov (United States)

    Celaya-Padilla, Jose M; Galván-Tejada, Carlos E; López-Monteagudo, F E; Alonso-González, O; Moreno-Báez, Arturo; Martínez-Torteya, Antonio; Galván-Tejada, Jorge I; Arceo-Olague, Jose G; Luna-García, Huizilopoztli; Gamboa-Rosales, Hamurabi

    2018-02-03

    Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, but the road surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when not identified by the drivers. Furthermore, human-induced abnormalities, such as speed bumps, could also cause accidents. In addition, while said obstacles ought to be signalized according to specific road regulation, they are not always correctly labeled. Therefore, we developed a novel method for the detection of road abnormalities (i.e., speed bumps). This method makes use of a gyro, an accelerometer, and a GPS sensor mounted in a car. After having the vehicle cruise through several streets, data is retrieved from the sensors. Then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018, and an area under the receiver operating characteristic curve of 0.9784. This methodology has the potential to detect speed bumps in quasi real-time conditions, and can be used to construct a real-time surface monitoring system.

  20. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hyper saline lake (before September 2010) in the world. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using the multi-temporal Landsat 5-TM, 7-ETM+ and 8-OLI images. In doing so, the applicability of different satellite-derived indexes including Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI) were investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to other indexes and hence it was used to model the spatiotemporal changes of the lake. In addition, a new approach based on Principal Components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia surface area in the period 2000–2013, especially between 2010 and 2013 when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting the changes between two and three different times, simultaneously.
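    NDWI itself is a one-line computation; the sketch below assumes two pre-read Landsat bands (green and near-infrared) supplied as numpy arrays and a zero threshold for the water mask, which is the conventional but not universal choice.

        import numpy as np

        def ndwi(green, nir, threshold=0.0):
            """McFeeters NDWI = (Green - NIR) / (Green + NIR); positive values flag water."""
            green = green.astype(np.float64)
            nir = nir.astype(np.float64)
            index = (green - nir) / np.maximum(green + nir, 1e-9)
            return index, index > threshold     # continuous index and binary water mask

        # Change detection between two dates could then compare the masks, e.g.
        # lost_water = mask_2000 & ~mask_2013.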

  1. Speed Bump Detection Using Accelerometric Features: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Jose M. Celaya-Padilla

    2018-02-01

    Full Text Available Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, but the road surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when not identified by the drivers. Furthermore, human-induced abnormalities, such as speed bumps, could also cause accidents. In addition, while said obstacles ought to be signalized according to specific road regulation, they are not always correctly labeled. Therefore, we developed a novel method for the detection of road abnormalities (i.e., speed bumps). This method makes use of a gyro, an accelerometer, and a GPS sensor mounted in a car. After having the vehicle cruise through several streets, data is retrieved from the sensors. Then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018, and an area under the receiver operating characteristic curve of 0.9784. This methodology has the potential to detect speed bumps in quasi real-time conditions, and can be used to construct a real-time surface monitoring system.

  2. Functional validation of candidate genes detected by genomic feature models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Østergaard, Solveig; Kristensen, Torsten Nygaard

    2018-01-01

    to investigate locomotor activity, and applied genomic feature prediction models to identify gene ontology (GO) categories predictive of this phenotype. Next, we applied the covariance association test to partition the genomic variance of the predictive GO terms to the genes within these terms. We...... then functionally assessed whether the identified candidate genes affected locomotor activity by reducing gene expression using RNA interference. In five of the seven candidate genes tested, reduced gene expression altered the phenotype. The ranking of genes within the predictive GO term was highly correlated......Understanding the genetic underpinnings of complex traits requires knowledge of the genetic variants that contribute to phenotypic variability. Reliable statistical approaches are needed to obtain such knowledge. In genome-wide association studies, variants are tested for association with trait...

  3. Residual signal feature extraction for gearbox planetary stage fault detection

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Ursin, Thomas; Sweeney, Christian Walsted

    2017-01-01

    Faults in planetary gears and related bearings, e.g. planet bearings and planet carrier bearings, pose inherent difficulties on their accurate and consistent detection associated mainly to the low energy in slow rotating stages and the operating complexity of planetary gearboxes. In this work......, identification of the expected spectral signature for proper residual signal calculation and filtering of any frequency component not related to the planetary stage. Two field cases of planet carrier bearing defect and planet wheel spalling are presented and discussed, showing the efficiency of the followed...

  4. Evaluation of feature detection algorithms for structure from motion

    CSIR Research Space (South Africa)

    Govender, N

    2009-11-01

    Full Text Available Structure from motion is a widely-used technique in computer vision to perform 3D reconstruction. The 3D...

  5. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

    Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer via utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a

  6. Delving Deep into Multiscale Pedestrian Detection via Single Scale Feature Maps

    Directory of Open Access Journals (Sweden)

    Xinchuan Fu

    2018-04-01

    Full Text Available The standard pipeline in pedestrian detection is to slide a pedestrian model over an image feature pyramid to detect pedestrians of different scales. In this pipeline, feature pyramid construction is time consuming and becomes the bottleneck for fast detection. Recently, a method called multiresolution filtered channels (MRFC) was proposed which uses only single-scale feature maps to achieve fast detection. However, there are two shortcomings in MRFC which limit its accuracy. One is that the receptive field correspondence across scales is weak. Another is that the features used are not scale invariant. In this paper, two solutions are proposed to tackle these two shortcomings respectively. Specifically, scale-aware pooling is proposed to make a better receptive field correspondence, and a soft decision tree is proposed to relieve the scale variance problem. When coupled with an efficient sliding window classification strategy, our detector achieves fast detection speed while maintaining state-of-the-art accuracy.

  7. Detection of Vandalism in Wikipedia using Metadata Features – Implementation in Simple English and Albanian sections

    Directory of Open Access Journals (Sweden)

    Arsim Susuri

    2017-03-01

    Full Text Available In this paper, we evaluate a list of classifiers in order to use them in the detection of vandalism by focusing on metadata features. Our work is focused on two low resource data sets (Simple English and Albanian) from Wikipedia. The aim of this research is to prove that this form of vandalism detection applied in one data set (language) can be extended into another data set (language). Article views data sets in Wikipedia have been used rarely for the purpose of detecting vandalism. We will show the benefits of using article views data set with features from the article revisions data set with the aim of improving the detection of vandalism. The key advantage of using metadata features is that these metadata features are language independent and simple to extract because they require minimal processing. This paper shows that application of vandalism models across low resource languages is possible, and vandalism can be detected through view patterns of articles.

  8. Automatic detection of solar features in HSOS full-disk solar images using guided filter

    Science.gov (United States)

    Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang

    2018-02-01

    A procedure is introduced for the automatic detection of solar features using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, introduced here into astronomical target detection for the first time, is adopted to enhance the edges of solar features and restrain the solar limb darkening. Specific features are then detected by the Otsu algorithm and a further threshold processing technique. Compared with other automatic detection procedures, our procedure has advantages such as real-time operation and reliability, as well as no need for a local threshold. It also greatly reduces the amount of computation, benefiting from the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images and the results show that the number of features detected by our procedure is consistent with manual detection.
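    A rough outline of the preprocessing-and-thresholding chain described above, assuming OpenCV with the contrib modules (cv2.ximgproc.guidedFilter). The synthetic disk image, filter settings and area threshold are illustrative stand-ins for a real HSOS frame and the paper's parameters.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in for a full-disk image: a bright disk with dark features.
    img = np.zeros((512, 512), np.uint8)
    cv2.circle(img, (256, 256), 220, 180, -1)          # solar disk
    cv2.circle(img, (200, 230), 12, 60, -1)            # dark feature (e.g. sunspot)
    cv2.circle(img, (310, 300), 8, 70, -1)
    img = cv2.add(img, np.random.randint(0, 20, img.shape, dtype=np.uint8))
    disk_mask = np.zeros_like(img)
    cv2.circle(disk_mask, (256, 256), 220, 255, -1)

    # 1) Median filter to suppress impulse noise.
    den = cv2.medianBlur(img, 5)

    # 2) Guided filter (requires the opencv-contrib package for cv2.ximgproc);
    #    the image guides itself, sharpening edges while flattening smooth
    #    gradients such as limb darkening.
    guided = cv2.ximgproc.guidedFilter(guide=den, src=den, radius=8, eps=100)

    # 3) Otsu threshold, restricted to the disk, then simple size filtering.
    _, mask = cv2.threshold(guided, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask[disk_mask == 0] = 0
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    features = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]
    print(f"{len(features)} candidate features detected")
    ```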

  9. Detecting submerged features in water: modeling, sensors, and measurements

    Science.gov (United States)

    Bostater, Charles R., Jr.; Bassetti, Luce

    2004-11-01

    It is becoming more important to understand the remote sensing systems and associated autonomous or semi-autonomous methodologies (robotics & mechatronics) that may be utilized in freshwater and marine aquatic environments. This need comes from several issues related not only to advances in our scientific understanding and technological capabilities, but also from the desire to ensure that the risks associated with UXO (unexploded ordnance), related submerged mines, as well as submerged targets (such as submerged aquatic vegetation) and debris left from previous human activities, are remotely sensed and identified, followed by reduced risk through detection and removal. This paper describes (a) remote sensing systems and (b) platforms (fixed and mobile), and demonstrates (c) the value of thinking in terms of scalability as well as modularity in the design and application of new systems now being constructed within our laboratory and other laboratories, as well as future systems. New remote sensing systems - moving or fixed sensing systems, as well as autonomous or semi-autonomous robotic and mechatronic systems - will be essential to secure domestic preparedness for humanitarian reasons. These remote sensing systems hold tremendous value, if thoughtfully designed, for other applications which include environmental monitoring in ambient environments.

  10. MRI for the detection of calcific features of vertebral haemangioma.

    Science.gov (United States)

    Bender, Y Y; Böker, S M; Diederichs, G; Walter, T; Wagner, M; Fallenberg, E; Liebig, T; Rickert, M; Hamm, B; Makowski, M R

    2017-08-01

    To evaluate the diagnostic performance of susceptibility-weighted-magnetic-resonance imaging (SW-MRI) for the detection of vertebral haemangiomas (VHs) compared to T1/T2-weighted MRI sequences, radiographs, and computed tomography (CT). The study was approved by the local ethics review board. An SW-MRI sequence was added to the clinical spine imaging protocol. The image-based diagnosis of 56 VHs in 46 patients was established using T1/T2 MRI in combination with radiography/CT as the reference standard. VHs were assessed based on T1/T2-weighted MRI images alone and in combination with SW-MRI, while radiographs/CT images were excluded from the analysis. Fifty-one of 56 VHs could be identified on T1/T2 MRI images alone, if radiographs/CT images were excluded from analysis. In five cases (9.1%), additional radiographs/CT images were required for the imaging-based diagnosis. If T1/T2 and SW-MRI images were used in combination, all VHs could be diagnosed, without the need for radiography/CT. Size measurements revealed a close correlation between CT and SW-MRI (R² = 0.94; p […]) […] spine, as the use of additional CT/radiography can be minimized. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  11. A systematic exploration of the micro-blog feature space for teens stress detection.

    Science.gov (United States)

    Zhao, Liang; Li, Qi; Xue, Yuanyuan; Jia, Jia; Feng, Ling

    2016-01-01

    In the modern stressful society, growing teenagers experience severe stress from different aspects, from school to friends, from self-cognition to inter-personal relationships, which negatively influences their smooth and healthy development. Being timely and accurately aware of teenagers' psychological stress and providing effective measures to help immature teenagers cope with stress are highly valuable to both teenagers and society. Previous work demonstrates the feasibility of sensing teenagers' stress from their tweeting content and context on the open social media (micro-blog) platform. However, a tweet is still too short for teens to express their stressful status in a comprehensive way. Considering the topic continuity from the tweeting content to the follow-up comments and responses between the teenager and his/her friends, we combine the content of comments and responses under the tweet to supplement the tweet content. Also, friends' caring comments like "what happened?", "Don't worry!", "Cheer up!", etc. provide hints to the teenager's stressful status. Hence, in this paper, we propose to systematically explore the micro-blog feature space, comprised of four kinds of features [tweeting content features (FW), posting features (FP), interaction features (FI), and comment-response features (FC) between teenagers and friends] for teenagers' stress category and stress level detection. We extract and analyze these feature values and their impacts on teen stress detection. We evaluate the framework through a real user study of 36 high school students aged 17. Different classifiers are employed to detect potential stress categories and corresponding stress levels. Experimental results show that all the features in the feature space positively affect stress detection, and that linguistic negative emotion, the proportion of negative sentences, friends' caring comments and the teen's reply rate play more significant roles than the remaining features. Micro-blog platform provides

  12. Learning Rich Features from RGB-D Images for Object Detection and Segmentation

    OpenAIRE

    Gupta, Saurabh; Girshick, Ross; Arbeláez, Pablo; Malik, Jitendra

    2014-01-01

    In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an av...

  13. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    OpenAIRE

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snat...

  14. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Science.gov (United States)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel method for detecting ships, which aims to make full use of both the spatial and spectral information of hyperspectral images, is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature and different multiple features. Compared with the traditional single-feature method and the Support Vector Machine (SVM) model, the proposed method can stably detect ships against a complex background and can effectively improve ship detection accuracy.
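    A hedged sketch of the feature-plus-classifier combination described above: PCA for spectral features, GLCM statistics for texture, and a random forest. scikit-image's graycomatrix/graycoprops are assumed (older releases spell them greycomatrix/greycoprops), and the toy training data stand in for labelled EO-1 candidate windows.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from skimage.feature import graycomatrix, graycoprops

    def spectral_features(cube, n_components=5):
        """PCA on a (rows, cols, bands) hyperspectral cube -> per-pixel features."""
        r, c, b = cube.shape
        return PCA(n_components).fit_transform(cube.reshape(-1, b)).reshape(r, c, -1)

    def texture_features(patch):
        """GLCM contrast/homogeneity/energy of an 8-bit intensity patch."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, p).ravel()
                          for p in ("contrast", "homogeneity", "energy")])

    # Demo of the texture part on a toy patch (6 statistics per patch).
    patch = (np.random.rand(32, 32) * 255).astype(np.uint8)
    print(texture_features(patch).shape)

    # Toy training step: one feature vector and label per candidate window.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 11))     # e.g. 5 spectral PCs + 6 GLCM statistics
    y = rng.integers(0, 2, size=200)   # 1 = ship, 0 = sea clutter / background
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))
    ```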

  15. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID

    2011-09-27

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  16. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.

  17. INTEGRATION OF IMAGE-DERIVED AND POS-DERIVED FEATURES FOR IMAGE BLUR DETECTION

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2016-06-01

    Full Text Available Image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on the UAV to enable acquisition of the UAV trajectory. It can be used to calculate the positional and angular velocities when the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps include feature extraction, blur image detection and verification. In feature extraction, this study extracts different features from the images and from the POS. The image-derived features include the mean and standard deviation of the image gradient. For POS-derived features, we modify the traditional degree-of-linear-blur (blinear) method to a degree-of-motion-blur (bmotion) based on the collinearity condition equations and POS parameters. Besides, POS parameters such as positional and angular velocities are also adopted as POS-derived features. In blur detection, this study uses a Support Vector Machine (SVM) classifier and the extracted features (i.e. image information, POS data, blinear and bmotion) to separate blurred and sharp UAV images. The experiment utilizes the SenseFly eBee UAV system. The number of images is 129. In blur image detection, we use the proposed degree-of-motion-blur and other image features to classify the blurred and sharp images. The classification result shows that the overall accuracy using image features alone is only 56%. The integration of image-derived and POS-derived features improved the overall accuracy from 56% to 76% in blur detection. Besides, this study indicates that the performance of the proposed degree-of-motion-blur is better than that of the traditional degree-of-linear-blur.
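    A small sketch of the image-derived part of the feature set (mean and standard deviation of the gradient magnitude) feeding an SVM, with random placeholders standing in for the POS-derived columns; it is not the authors' full degree-of-motion-blur computation.

    ```python
    import numpy as np
    import cv2
    from sklearn.svm import SVC

    def image_blur_features(gray):
        """Mean and standard deviation of the image gradient magnitude."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy)
        return np.array([mag.mean(), mag.std()])

    # Demo on a toy frame; in practice this is computed per UAV image.
    toy = (np.random.rand(240, 320) * 255).astype(np.uint8)
    print(image_blur_features(toy))

    # In the paper these image-derived features are stacked with POS-derived
    # ones (positional/angular velocities, degree-of-motion-blur); random
    # placeholders stand in for the POS columns here.
    rng = np.random.default_rng(0)
    n = 129                                    # number of images in the study
    X = np.hstack([rng.normal(size=(n, 2)),    # image gradient mean / std
                   rng.normal(size=(n, 3))])   # POS velocities, bmotion (stub)
    y = rng.integers(0, 2, size=n)             # 1 = blurred, 0 = sharp
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))
    ```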

  18. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic

  19. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  20. Attention in the processing of complex visual displays: detecting features and their combinations.

    Science.gov (United States)

    Farell, B

    1984-02-01

    The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlaid the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.

  1. Attentional effects on preattentive vision: Spatial precues affect the detection of simple features

    NARCIS (Netherlands)

    Theeuwes, J.; Kramer, A.F.; Atchley, P.

    1999-01-01

    Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of

  2. Detection of emotional faces: salient physical features guide effective visual search.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  3. Detection of corn and weed species by the combination of spectral, shape and textural features

    Science.gov (United States)

    Accurate detection of weeds in farmland can help reduce pesticide use and protect the agricultural environment. To develop intelligent equipment for weed detection, this study used an imaging spectrometer system, which supports micro-scale plant feature analysis by acquiring high-resolution hyper sp...

  4. Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway

    Science.gov (United States)

    Naseer, M.; Supriadi, I.; Supangkat, S. H.

    2018-03-01

    Unsealed roadsides and problems with the road surface are common causes of road crashes, particularly when combined with curves. Curve traffic signs are an important component for giving drivers early warning, especially in high-speed traffic such as on a highway. Traffic sign detection has become a very active research topic, and this paper discusses the detection of curve traffic signs. Two types of curve signs are discussed, namely the curve turning left and the curve turning right, and all the data samples used are curves recorded from signs on the Bandung–Jakarta Highway. Feature detection of the curve signs uses the Speeded Up Robust Features (SURF) method, where the detected scene image is 800x450. From 45 curve-turn-right images, the system detects the features well in 35 images, a success rate of 77.78%, while from the 45 curve-turn-left images, the system detects the features well in 34 images, a success rate of 75.56%, so the average accuracy of the detection process is 76.67%. The average time for the detection process is 0.411 seconds.
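    A keypoint-matching sketch in the spirit of the abstract. SURF itself requires an opencv-contrib build with the non-free modules (cv2.xfeatures2d.SURF_create), so ORB is used here as a freely available stand-in; the synthetic images, ratio-test threshold and match-count decision rule are assumptions.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-ins: a textured "sign" template pasted into a larger frame
    # (replace with a real curve-sign template and an 800x450 highway frame).
    rng = np.random.default_rng(0)
    template = cv2.resize((rng.random((20, 20)) * 255).astype(np.uint8),
                          (80, 80), interpolation=cv2.INTER_NEAREST)
    scene = (rng.random((450, 800)) * 255).astype(np.uint8)
    scene[100:180, 300:380] = template

    # SURF lives in opencv-contrib and needs a non-free build; ORB is a free
    # keypoint detector used here instead.
    detector = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = detector.detectAndCompute(template, None)
    kp2, des2 = detector.detectAndCompute(scene, None)

    # Match descriptors and keep the distinctive ones (Lowe-style ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

    # A simple decision rule: enough good matches -> sign present in the frame.
    print("sign detected" if len(good) >= 15 else "no sign")
    ```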

  5. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    Science.gov (United States)

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  6. Personalized features for attention detection in children with Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Fahimi, Fatemeh; Guan, Cuntai; Wooi Boon Goh; Kai Keng Ang; Choon Guan Lim; Tih Shih Lee

    2017-07-01

    Measuring attention from electroencephalogram (EEG) has found applications in the treatment of Attention Deficit Hyperactivity Disorder (ADHD). It is of great interest to understand what features in EEG are most representative of attention. Intensive research has been done in the past and it has been proven that frequency band powers and their ratios are effective features in detecting attention. However, there are still unanswered questions, like, what features in EEG are most discriminative between attentive and non-attentive states? Are these features common among all subjects or are they subject-specific and must be optimized for each subject? Using Mutual Information (MI) to perform subject-specific feature selection on a large data set including 120 ADHD children, we found that besides theta beta ratio (TBR) which is commonly used in attention detection and neurofeedback, the relative beta power and theta/(alpha+beta) (TBAR) are also equally significant and informative for attention detection. Interestingly, we found that the relative theta power (which is also commonly used) may not have sufficient discriminative information itself (it is informative only for 3.26% of ADHD children). We have also demonstrated that although these features (relative beta power, TBR and TBAR) are the most important measures to detect attention on average, different subjects have different set of most discriminative features.
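    A short sketch of how the band-power features named above (TBR, TBAR, relative beta power) can be computed from a single EEG channel with a Welch periodogram; the sampling rate and band boundaries are assumed, not taken from the paper.

    ```python
    import numpy as np
    from scipy.signal import welch

    FS = 256  # sampling rate in Hz (assumed)
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg, fs=FS):
        """Absolute power in each band from a Welch periodogram."""
        f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        df = f[1] - f[0]
        return {name: psd[(f >= lo) & (f < hi)].sum() * df
                for name, (lo, hi) in BANDS.items()}

    def attention_features(eeg, fs=FS):
        """TBR, TBAR and relative beta power, as discussed in the abstract."""
        p = band_powers(eeg, fs)
        total = sum(p.values())
        return {
            "TBR": p["theta"] / p["beta"],                   # theta / beta
            "TBAR": p["theta"] / (p["alpha"] + p["beta"]),   # theta / (alpha+beta)
            "relative_beta": p["beta"] / total,
        }

    # Toy one-channel signal: 10 s of noise with a weak 6 Hz (theta) component.
    t = np.arange(0, 10, 1 / FS)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
    print(attention_features(eeg))
    ```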

  7. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    that describe object visual appearance such as shape, colour, texture etc. This thesis focuses on robust object detection and pose estimation of rigid objects using 3D information. The thesis main contributions are novel feature descriptors together with object detection and pose estimation algorithms....... The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed...... of the methods work well for one type of objects in a specific scenario, in another scenario or with different objects they might fail, therefore more robust solutions are required. The typical problem solution is the design of robust feature descriptors, where feature descriptors contain information...

  8. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach.

    Science.gov (United States)

    Irshad, Humayun; Jalali, Sepehr; Roux, Ludovic; Racoceanu, Daniel; Hwee, Lim Joo; Naour, Gilles Le; Capron, Frédérique

    2013-01-01

    According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. The aim is to investigate the various texture features and Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and classification rate.

  9. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach

    Directory of Open Access Journals (Sweden)

    Humayun Irshad

    2013-01-01

    Full Text Available Context: According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. Aims: The aim is to investigate the various texture features and Hierarchical Model and X (HMAX biologically inspired approach for mitosis detection using machine-learning techniques. Materials and Methods: We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM, and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. Results: The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS dataset provided for an International Conference on Pattern Recognition (ICPR 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Conclusions: Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and

  10. Exploration of available feature detection and identification systems and their performance on radiographs

    Science.gov (United States)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.

  11. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
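    A minimal sketch of the time-delay embedding and recurrence-plot construction the method builds on; the recurrence rate shown is just one example feature, since the abstract does not name the three features used, and the embedding parameters and threshold are assumptions.

    ```python
    import numpy as np

    def delay_embed(x, dim=3, tau=5):
        """Takens-style time-delay embedding of a 1-D signal."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    def recurrence_plot(traj, eps=None):
        """Binary recurrence matrix: 1 where state-space points are close."""
        d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * d.max()
        return (d < eps).astype(int)

    def recurrence_rate(rp):
        """Fraction of recurrent points - one simple recurrence-plot feature."""
        return rp.mean()

    # Toy "tracheal sound" segment: a noisy oscillation.
    t = np.linspace(0, 1, 800)
    seg = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)
    rp = recurrence_plot(delay_embed(seg))
    print("recurrence rate:", recurrence_rate(rp))
    ```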

  12. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  13. The effect of bathymetric filtering on nearshore process model results

    Science.gov (United States)

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by over smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
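    A toy illustration of the kind of sensitivity test described above: a synthetic barred bathymetry is progressively smoothed (here with Gaussian filters as a stand-in for the study's smoothing filters) and compared against the full-resolution grid. The grid geometry and sandbar parameters are invented for the example.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Synthetic high-resolution bathymetry: a sloping beach with an alongshore-
    # rhythmic sandbar (placeholder for surveyed data).
    x = np.linspace(0, 500, 251)            # cross-shore distance (m)
    y = np.linspace(0, 1000, 501)           # alongshore distance (m)
    X, Y = np.meshgrid(x, y)
    bar = 1.5 * np.exp(-((X - 150) / 40) ** 2) * (1 + 0.3 * np.sin(2 * np.pi * Y / 250))
    depth = 0.02 * X - bar

    # Progressively filtered grids, as in the sensitivity analysis above.
    for sigma in (1, 5, 10, 20):            # smoothing length scales (grid cells)
        smoothed = gaussian_filter(depth, sigma=sigma)
        rms_change = np.sqrt(np.mean((smoothed - depth) ** 2))
        print(f"sigma={sigma:>2} cells  RMS difference from full resolution: "
              f"{rms_change:.3f} m")
    ```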

  14. Detailed bathymetric surveys in the central Indian Basin

    Digital Repository Service at National Institute of Oceanography (India)

    Kodagali, V.N.; KameshRaju, K.A.; Ramprasad, T.; George, P.; Jaisankar, S.

    Over 420,000 line kilometers of echo-sounding data was collected in the Central Indian Basin. This data was digitized, merged with navigation data and a detailed bathymetric map of the Basin was prepared. The Basin can be broadly classified...

  15. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect, in time, invisible leaking gas that is usually dangerous and easily leads to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, with existing infrared video based gas leak detection methods, all the moving regions of a video frame can be detected as leaking gas regions, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features From Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
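    A rough sketch of the detection chain outlined above: a GMM background model (OpenCV's MOG2), connected components, and a per-region FAST-corner density used as a crude stand-in for the paper's modified FAST features and PPP condition; the video path and thresholds are placeholders.

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("ir_gas_leak.mp4")          # placeholder IR video path
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    fast = cv2.FastFeatureDetector_create(threshold=20)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = bg.apply(gray)                          # GMM foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        keypoints = fast.detect(gray, mask)
        for i in range(1, n):
            area = stats[i, cv2.CC_STAT_AREA]
            if area < 100:
                continue
            # Corner density inside the region: gas plumes tend to be smooth and
            # diffuse (few FAST corners per pixel); rigid movers have many.
            pts_in = sum(labels[int(kp.pt[1]), int(kp.pt[0])] == i
                         for kp in keypoints)
            if pts_in / area < 0.01:
                print("candidate gas region, area", area)
    cap.release()
    ```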

  16. LC-IMS-MS Feature Finder: detecting multidimensional liquid chromatography, ion mobility and mass spectrometry features in complex datasets.

    Science.gov (United States)

    Crowell, Kevin L; Slysz, Gordon W; Baker, Erin S; LaMarche, Brian L; Monroe, Matthew E; Ibrahim, Yehia M; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D

    2013-11-01

    The addition of ion mobility spectrometry to liquid chromatography-mass spectrometry experiments requires new, or updated, software tools to facilitate data processing. We introduce a command line software application LC-IMS-MS Feature Finder that searches for molecular ion signatures in multidimensional liquid chromatography-ion mobility spectrometry-mass spectrometry (LC-IMS-MS) data by clustering deisotoped peaks with similar monoisotopic mass, charge state, LC elution time and ion mobility drift time values. The software application includes an algorithm for detecting and quantifying co-eluting chemical species, including species that exist in multiple conformations that may have been separated in the IMS dimension. LC-IMS-MS Feature Finder is available as a command-line tool for download at http://omics.pnl.gov/software/LC-IMS-MS_Feature_Finder.php. The Microsoft.NET Framework 4.0 is required to run the software. All other dependencies are included with the software package. Usage of this software is limited to non-profit research use (see README). rds@pnnl.gov. Supplementary data are available at Bioinformatics online.

  17. Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming

    Science.gov (United States)

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas A.; Summers, Ronald M.

    2009-02-01

    Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible combination for improving colonic polyps detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then in order to make full use of the traditional features (defined as group A) and the new features (group B) which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel based on the combined set of features. We did leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 6-9mm polyp detections). Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a false positive per patient rate of 7, the sensitivity on 6-9mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p<=0.01).

  18. Unravel Spurious Bathymetric Highs on the Western Continental Margin of India

    Science.gov (United States)

    Mahale, V. P.

    2017-12-01

    Swath-mapping multibeam echosounder systems (MBES) have become a de facto standard component on today's research vessels (RV). Modern MBES provide high temporal and spatial resolution for mapping seabed morphology. Improved resolution capabilities require large hull-mounted transceivers, which after installation undergo a calibration procedure during the sea acceptance test (SAT). To accurately estimate the various vessel offsets and lever-arm corrections, the installer runs calibration lines over a prominent seabed feature. In 2014, while conducting the SAT for the RV Sindhu Sadhana and calibrating its ATLAS-make MBES, a hunt was on to find suitable bathymetric highs in the region of operation. Regional hydrographic charts published by the National Hydrographic Office, India, were consulted to locate such features. Two bathymetric highs were spotted on the chart, 20 km apart and 40 km west of the shelf edge on the Western Continental Margin of India. The charted depths on these highs are 252 m and 343 m on a relatively even but moderately sloping seabed, representing isolated elevations of 900 m. The geographic locations of these knolls were verified against GEBCO's 30-arc-second gridded bathymetry before heading out for the waypoints. There were no signs of knolls at those locations, indicating erroneous georeferencing. Hence, the region was revisited in the following years until an area of 3000 sq. km had been mapped. Failing to locate the bathymetric highs, they are referred to as 'spurious'. An investigation was planned to unravel the rationale for the existence and persistence of these knolls in the hydrographic charts since historic times. Tweaking the MBES settings reveals the existence of a strong acoustic scattering layer, onto which even the depth-tracking gate locks, and this is documented. Analogously, in the past, ships transecting the region equipped with single-beam echosounders tuned for shallow-depth operations might have charted the

  19. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic, aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (have a disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improve the sensitivity and minimize the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with a selection metric FAST (feature assessment by sliding thresholds) based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
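    A simplified sketch of the idea behind mR-FAST: greedily pick features with high single-feature AUC while penalising redundancy with already-selected ones. The correlation-based redundancy term and the weighting are assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def mr_fast_like_selection(X, y, k=5, alpha=0.5):
        """Greedy selection: maximise single-feature AUC, penalise correlation
        with already-selected features (a simplification of mRMR + FAST)."""
        n_features = X.shape[1]
        auc = np.array([max(roc_auc_score(y, X[:, j]), roc_auc_score(y, -X[:, j]))
                        for j in range(n_features)])
        selected = [int(np.argmax(auc))]
        while len(selected) < k:
            best, best_score = None, -np.inf
            for j in range(n_features):
                if j in selected:
                    continue
                redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                      for s in selected])
                score = auc[j] - alpha * redundancy
                if score > best_score:
                    best, best_score = j, score
            selected.append(best)
        return selected

    # Toy data: two informative features plus noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))
    y = (X[:, 2] - 0.8 * X[:, 7] + rng.normal(scale=0.5, size=300) > 0).astype(int)
    print(mr_fast_like_selection(X, y, k=3))
    ```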

  20. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily the lack of awareness and proper health care. As they say, prevention is better than cure; a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help save their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set, to see which better suits the automated system in terms of detection rate. For this, each cell is segmented using the multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images and the data extracted are subjected to data mining techniques to determine which feature is best for which stage of cancer. The results thus obtained clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  1. 2006 NOAA Bathymetric Lidar: Puerto Rico (Southwest)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set (Project Number OPR-I305-KRL-06) depicts depth values (mean 5 meter gridded) collected using LiDAR (Light Detection & Ranging) from the shoreline...

  2. Fabric defect detection based on visual saliency using deep feature and low-rank recovery

    Science.gov (United States)

    Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan

    2018-04-01

    Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, the initial network parameters are obtained by unsupervised pre-training on the large MNIST dataset; supervised fine-tuning on a fabric image library with Convolutional Neural Networks (CNNs) then yields a more accurate deep network model. Second, the fabric images are uniformly divided into image blocks of the same size, and multi-layer deep features are extracted from each block with the trained network; all extracted features are then concatenated into a feature matrix. Third, low-rank matrix recovery divides the feature matrix into a low-rank matrix, which represents the background, and a sparse matrix, which represents the salient defect. Finally, an iterative optimal-threshold segmentation algorithm segments the saliency maps generated from the sparse matrix to locate the fabric defect area. Experimental results demonstrate that the features extracted by the CNN characterize fabric texture better than traditional hand-crafted features such as LBP and HOG, and that the proposed method accurately detects the defect regions of various fabric defects, even for images with complex texture.
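
    The low-rank recovery step can be made concrete with a generic robust-PCA (principal component pursuit) iteration that splits a feature matrix into a low-rank background part and a sparse defect part. The sketch below is a textbook inexact-ALM scheme with illustrative data and constants; it is not the paper's exact optimisation.

```python
# Illustrative sketch: split D ~ L (low rank background) + S (sparse defects)
# with a basic robust-PCA / principal-component-pursuit iteration.
import numpy as np

def shrink(M, tau):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, n_iter=100):
    """Decompose D into a low-rank part L and a sparse part S."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)
    mu_bar = mu * 1e7
    Y = np.zeros_like(D)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        mu = min(mu * 1.1, mu_bar)          # standard penalty growth
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    background = np.outer(rng.normal(size=80), rng.normal(size=60))  # rank-1 "texture"
    defects = np.zeros((80, 60))
    defects[10:14, 20:24] = 10.0                                     # sparse anomaly
    L, S = rpca(background + defects)
    print("defect energy recovered in S:", round(np.abs(S[10:14, 20:24]).mean(), 2))
```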

  3. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    Science.gov (United States)

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the contribution of each peak feature to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection of EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers the best peak detection and the highest classification rate in the conducted experiments. The evaluation results indicate that the accuracy of peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared with the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a lower-variance model.
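
    For readers unfamiliar with PSO-driven feature selection, the sketch below shows a generic binary PSO that searches for a feature subset maximising cross-validated accuracy. The classifier, dataset, swarm size and coefficients are illustrative assumptions; the paper's RA-PSO framework and EEG peak features are not reproduced here.

```python
# Illustrative sketch: binary PSO feature selection maximising cross-validated
# accuracy. Generic illustration only -- not the paper's RA-PSO framework.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Cross-validated accuracy of a KNN classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, d))
    vel = rng.normal(scale=0.1, size=(n_particles, d))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid-transformed velocities give the probability of a bit being 1.
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    mask, acc = binary_pso(X, y)
    print("selected", int(mask.sum()), "of", X.shape[1], "features; cv accuracy", round(acc, 3))
```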

  4. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    Directory of Open Access Journals (Sweden)

    S. Park

    2014-06-01

    Full Text Available This paper presents a novel method for the detection of multiple landmines using a ground penetrating radar (GPR). Conventional algorithms mainly focus on the detection of a single landmine and do not extend linearly to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of objects buried in the ground, isolation of each object, feature extraction, and detection of landmines. The number of objects in the GPR signal is estimated using the energy projection method. Signals for the individual objects are then extracted using the symmetry filtering method. Each signal is processed for features, which are given as input to a support vector machine (SVM) for landmine detection. Three landmines buried in various ground conditions are considered for testing the proposed method, and the results demonstrate that it can successfully detect multiple landmines.

  5. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Directory of Open Access Journals (Sweden)

    Edwin B. Olson

    2010-11-01

    Full Text Available Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.

  6. A general purpose feature extractor for light detection and ranging data.

    Science.gov (United States)

    Li, Yangming; Olson, Edwin B

    2010-01-01

    Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
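
    The image-processing primitive being adapted here, the Kanade-Tomasi ("minimum eigenvalue") corner response, can be sketched as below on a 2D grid. A full LIDAR pipeline would first rasterise the scan and operate at multiple scales; those steps are omitted and all names are illustrative.

```python
# Illustrative sketch: Kanade-Tomasi corner response (minimum eigenvalue of the
# structure tensor) on a 2D occupancy/range image.
import numpy as np

def box_smooth(img, k=5):
    """Cheap separable box filter used as the structure-tensor window."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)

def kanade_tomasi_response(img, k=5):
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = box_smooth(gx * gx, k), box_smooth(gy * gy, k), box_smooth(gx * gy, k)
    # Minimum eigenvalue of the 2x2 structure tensor [[Ixx, Ixy], [Ixy, Iyy]].
    trace = Ixx + Iyy
    det = Ixx * Iyy - Ixy ** 2
    return trace / 2 - np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:, 30:] = 1.0                      # an L-shaped "wall" with a corner near (20, 30)
    resp = kanade_tomasi_response(img)
    print("strongest corner response at", np.unravel_index(resp.argmax(), resp.shape))
```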

  7. An Improved Semisupervised Outlier Detection Algorithm Based on Adaptive Feature Weighted Clustering

    Directory of Open Access Journals (Sweden)

    Tingquan Deng

    2016-01-01

    Full Text Available There already exist various approaches to outlier detection, among which semisupervised methods achieve encouraging superiority due to the introduction of prior knowledge. In this paper, an adaptive feature weighted clustering-based semisupervised outlier detection strategy is proposed. This method maximizes the membership degree of a labeled normal object to the cluster it belongs to and minimizes the membership degrees of a labeled outlier to all clusters. Since different features or components of a dataset have distinct significance in determining whether an object is an inlier or an outlier, each feature is adaptively assigned a different weight according to the degree of deviation between that feature of all objects and that of a given cluster prototype. A series of experiments on a synthetic dataset and several real-world datasets are implemented to verify the effectiveness and efficiency of the proposal.

  8. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    Science.gov (United States)

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
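
    A minimal sketch of SFFS driven by cross-validated AUC is given below: a forward step adds the feature with the largest AUC gain, and a floating step removes any feature whose removal improves the AUC. The classifier, dataset and stopping rule are stand-ins and do not reproduce the authors' CTC pipeline.

```python
# Illustrative sketch: sequential forward floating selection (SFFS) with a
# cross-validated AUC criterion. Classifier, data and limits are stand-ins.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def auc_of(subset, X, y):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X[:, sorted(subset)], y, cv=3, scoring="roc_auc").mean()

def sffs(X, y, max_features=5):
    selected, best = [], 0.0
    remaining = set(range(X.shape[1]))
    for _ in range(3 * max_features):                  # guard against cycling
        if len(selected) >= max_features or not remaining:
            break
        # Forward step: add the feature giving the highest AUC.
        gains = {j: auc_of(selected + [j], X, y) for j in remaining}
        j_best = max(gains, key=gains.get)
        selected.append(j_best)
        remaining.discard(j_best)
        best = gains[j_best]
        # Floating step: drop any feature whose removal improves the AUC.
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for j in list(selected):
                trial = [f for f in selected if f != j]
                score = auc_of(trial, X, y)
                if score > best:
                    selected, best, improved = trial, score, True
                    remaining.add(j)
                    break
    return selected, best

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    feats, auc = sffs(X, y)
    print("selected features:", feats, "cross-validated AUC:", round(auc, 3))
```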

  9. Towards Stable Adversarial Feature Learning for LiDAR based Loop Closure Detection

    OpenAIRE

    Xu, Lingyun; Yin, Peng; Luo, Haibo; Liu, Yunhui; Han, Jianda

    2017-01-01

    Stable feature extraction is the key to the loop closure detection (LCD) task in the simultaneous localization and mapping (SLAM) framework. In our paper, feature extraction is performed using generative adversarial network (GAN) based unsupervised learning. GANs are powerful generative models; however, GAN-based adversarial learning suffers from training instability. We find that the data-code joint distribution in the adversarial learning is a more complex manifold than in the...

  10. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has the highest efficacy: it takes only 3%-5% of the computational time of the GA approach and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using the GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized

  11. Significance of MPEG-7 textural features for improved mass detection in mammography.

    Science.gov (United States)

    Eltonsy, Nevine H; Tourassi, Georgia D; Fadeev, Aleksey; Elmaghraby, Adel S

    2006-01-01

    The purpose of the study is to investigate the significance of MPEG-7 textural features for improving the detection of masses in screening mammograms. The detection scheme was originally based on morphological directional neighborhood features extracted from mammographic regions of interest (ROIs). Receiver operating characteristic (ROC) analysis was performed to evaluate the performance of each set of features independently and merged into a back-propagation artificial neural network (BPANN) using the leave-one-out sampling scheme (LOOSS). The study was based on a database of 668 mammographic ROIs (340 depicting cancer regions and 328 depicting normal parenchyma). Overall, the ROC area index of the BPANN using the directional morphological features was Az=0.85+/-0.01. The MPEG-7 edge histogram descriptor-based BPANN showed an ROC area index of Az=0.71+/-0.01, while homogeneous textural descriptors using 30 and 120 channels helped the BPANN achieve similar ROC area indexes of Az=0.882+/-0.02 and Az=0.877+/-0.01, respectively. After merging the MPEG-7 homogeneous textural features with the directional neighborhood features, the performance of the BPANN increased, providing an ROC area index of Az=0.91+/-0.01. MPEG-7 homogeneous textural descriptors significantly improved the morphology-based detection scheme.

  12. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  13. Digital mammography: Mixed feature neural network with spectral entropy decision for detection of microcalcifications

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, B. [Univ. of South Florida, Tampa, FL (United States)]|[Nanjing Univ. of Posts and Telecommunications (China). Dept. of Telecommunication Engineering; Qian, W.; Clarke, L.P. [Univ. of South Florida, Tampa, FL (United States)

    1996-10-01

    A computationally efficient mixed feature based neural network (MFNN) is proposed for the detection of microcalcification clusters (MCC's) in digitized mammograms. The MFNN employs features computed in both the spatial and spectral domain and uses spectral entropy as a decision parameter. Backpropagation with Kalman Filtering (KF) is employed to allow more efficient network training as required for evaluation of different features, input images, and related error analysis. A previously reported, wavelet-based image-enhancement method is also employed to enhance microcalcification clusters for improved detection. The relative performance of the MFNN for both the raw and enhanced images is evaluated using a common image database of 30 digitized mammograms, with 20 images containing 21 biopsy proven MCC's and ten normal cases. The computed sensitivity (true positive (TP) detection rate) was 90.1% with an average low false positive (FP) detection of 0.71 MCCs/image for the enhanced images using a modified k-fold validation error estimation technique. The corresponding computed sensitivity for the raw images was reduced to 81.4% and with 0.59 FP's MCCs/image. A relative comparison to an earlier neural network (NN) design, using only spatially related features, suggests the importance of the addition of spectral domain features when the raw image data are analyzed.

  14. Digital mammography: Mixed feature neural network with spectral entropy decision for detection of microcalcifications

    International Nuclear Information System (INIS)

    Zheng, B.

    1996-01-01

    A computationally efficient mixed feature based neural network (MFNN) is proposed for the detection of microcalcification clusters (MCC's) in digitized mammograms. The MFNN employs features computed in both the spatial and spectral domain and uses spectral entropy as a decision parameter. Backpropagation with Kalman Filtering (KF) is employed to allow more efficient network training as required for evaluation of different features, input images, and related error analysis. A previously reported, wavelet-based image-enhancement method is also employed to enhance microcalcification clusters for improved detection. The relative performance of the MFNN for both the raw and enhanced images is evaluated using a common image database of 30 digitized mammograms, with 20 images containing 21 biopsy proven MCC's and ten normal cases. The computed sensitivity (true positive (TP) detection rate) was 90.1% with an average low false positive (FP) detection of 0.71 MCCs/image for the enhanced images using a modified k-fold validation error estimation technique. The corresponding computed sensitivity for the raw images was reduced to 81.4% and with 0.59 FP's MCCs/image. A relative comparison to an earlier neural network (NN) design, using only spatially related features, suggests the importance of the addition of spectral domain features when the raw image data are analyzed
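
    One common way to realise a spectral-entropy decision parameter is to normalise the 2-D power spectrum of an image block into a probability distribution and take its Shannon entropy, as sketched below; the exact definition used in these papers may differ.

```python
# Illustrative sketch: spectral entropy of an image block as a decision parameter.
import numpy as np

def spectral_entropy(block, eps=1e-12):
    power = np.abs(np.fft.fft2(block)) ** 2
    p = power / (power.sum() + eps)            # normalise the spectrum to a distribution
    return float(-(p * np.log2(p + eps)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.ones((32, 32))                 # energy concentrated in one bin -> low entropy
    textured = rng.normal(size=(32, 32))       # energy spread across bins -> high entropy
    print("smooth block:", round(spectral_entropy(smooth), 2),
          "textured block:", round(spectral_entropy(textured), 2))
```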

  15. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    Science.gov (United States)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  16. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge in features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  17. A new feature constituting approach to detection of vocal fold pathology

    Science.gov (United States)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be an excellent and reliable tool for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, namely k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results reveal that the proposed features give a very promising classification accuracy of 100% for both the MEEI and MAPACI databases.
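
    One simple flavour of k-means-based feature weighting is to weight each feature by how strongly the cluster centroids separate along it relative to the within-cluster spread, and then rescale the data before classification. The sketch below illustrates that idea only; the weighting rule, data and constants are assumptions rather than the paper's algorithm.

```python
# Illustrative sketch: k-means-derived feature weights (between-centroid spread
# over within-cluster spread), used to rescale features before classification.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_feature_weights(X, n_clusters=2, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    centers, labels = km.cluster_centers_, km.labels_
    between = centers.var(axis=0)                                  # centroid spread per feature
    within = np.mean([X[labels == c].var(axis=0) for c in range(n_clusters)], axis=0)
    w = between / (within + 1e-12)
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 4))
    X[:150, 0] += 4.0                            # feature 0 actually separates two groups
    w = kmeans_feature_weights(X)
    print("feature weights:", np.round(w, 3))    # feature 0 should dominate
    X_weighted = X * w                           # weighted features fed to a classifier
```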

  18. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account the features of the grayscale and the texture. Furthermore, space-time continuity is used in the detection algorithm to exclude the false CME regions. The algorithm includes three steps: i) define the feature vector which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using the decision rule of the space-time continuum. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.

  19. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    Full Text Available The DDoS attack stream converging at the victim host from different agent hosts becomes very large, which leads to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in a massive data stream. To address the lack of large labeled datasets required by supervised learning methods, and the relatively low detection accuracy and convergence speed of the unsupervised k-means algorithm, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm while using only a small amount of labeled data.
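
    The semisupervised clustering step can be illustrated with a seeded, constrained k-means: centroids are initialised from the few labelled samples and labelled points are never reassigned. The sketch below uses synthetic traffic-like features and is not the MF-CKM algorithm itself.

```python
# Illustrative sketch: seeded/constrained k-means with a handful of labelled samples.
import numpy as np

def constrained_kmeans(X, labeled_idx, labeled_y, k=2, n_iter=20):
    centers = np.array([X[labeled_idx[labeled_y == c]].mean(axis=0) for c in range(k)])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        labels[labeled_idx] = labeled_y            # constraint: labelled points stay fixed
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    normal = rng.normal(0.0, 1.0, size=(200, 3))       # e.g. benign traffic features
    attack = rng.normal(4.0, 1.0, size=(40, 3))        # e.g. DDoS-like traffic features
    X = np.vstack([normal, attack])
    labeled_idx = np.array([0, 1, 200, 201])           # a handful of labelled flows
    labeled_y = np.array([0, 0, 1, 1])
    labels, _ = constrained_kmeans(X, labeled_idx, labeled_y)
    print("flows flagged as attack:", int((labels == 1).sum()))
```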

  20. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection

    Directory of Open Access Journals (Sweden)

    Baojun Zhao

    2018-03-01

    Full Text Available With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).

  1. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    Science.gov (United States)

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).

  2. Vehicle parts detection based on Faster - RCNN with location constraints of vehicle parts feature point

    Science.gov (United States)

    Yang, Liqin; Sang, Nong; Gao, Changxin

    2018-03-01

    Vehicle parts detection plays an important role in public transportation safety and mobility. The aim of vehicle parts detection is to locate the position of each vehicle part. We propose a new approach that combines Faster RCNN and a three-level cascaded convolutional neural network (DCNN). The output of Faster RCNN is a series of bounding boxes with coordinate information, from which we can locate vehicle parts. The DCNN can precisely predict feature point positions, which are the centers of vehicle parts. We design an output strategy that combines these two results. This has two advantages: the quality of the bounding boxes is greatly improved, which means vehicle part feature point positions can be located more precisely, and the positional relationships between vehicle parts are preserved, effectively improving the validity and reliability of the result. Using our algorithm, the performance of vehicle parts detection improves markedly compared with Faster RCNN alone.

  3. Behavioral features recognition and oestrus detection based on fast approximate clustering algorithm in dairy cows

    Science.gov (United States)

    Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua

    2017-06-01

    Behavioural feature recognition is important for detecting oestrus and sickness in dairy herds, and there is a need for heat-detection aids. The detection method in this paper is based on measuring the individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. Data on the behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate k-means algorithm (FAKM) is proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.

  4. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    Science.gov (United States)

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose value for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yield better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Full Text Available Because saliency can be used as prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other relevant image-based applications. In the case of saliency detection from cluttered scenes, the detected salient object/region needs not only to be distinguished clearly from the background, but preferably also to be informative in terms of complete contour and local texture details to facilitate successive processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection from cluttered scenes. This model uses a combination of local texture patterns and color distribution as well as contour information to encode the superpixels, characterizing the local features of the image for region contrast computation. Combining this region contrast with the global saliency probability, a full-resolution salient map, in which the detected salient object/region adheres more closely to its inherent features, is obtained on the basis of the corresponding high-level saliency spatial distribution as well as pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets are carried out, and the comparative results show that the proposed method improves detection performance in terms of the corresponding measurements.

  6. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-03-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by

  7. Submerged karst landforms observed by multibeam bathymetric survey in Nagura Bay, Ishigaki Island, southwestern Japan

    Science.gov (United States)

    Kan, Hironobu; Urata, Kensaku; Nagao, Masayuki; Hori, Nobuyuki; Fujita, Kazuhiko; Yokoyama, Yusuke; Nakashima, Yosuke; Ohashi, Tomoya; Goto, Kazuhisa; Suzuki, Atsushi

    2015-01-01

    Submerged tropical karst features were discovered in Nagura Bay on Ishigaki Island in the southern Ryukyu Islands, Japan. The coastal seafloor at depths shallower than ~ 130 m has been subjected to repeated and alternating subaerial erosion and sedimentation during periods of Quaternary sea-level lowstands. We conducted a broadband multibeam survey in the central area of Nagura Bay (1.85 × 2.7 km) and visualized the high-resolution bathymetric results over a depth range of 1.6-58.5 m. Various types of humid tropical karst landforms were found to coexist within the bay, including fluviokarst, doline karst, cockpit karst, polygonal karst, uvalas, and mega-dolines. Although these submerged karst landforms are covered by thick postglacial reef and reef sediments, their shapes and sizes are distinct from those associated with coral reef geomorphology. The submerged landscape of Nagura Bay likely formed during multiple glacial and interglacial periods. According to our bathymetric results and the aerial photographs of the coastal area, this submerged karst landscape appears to have developed throughout Nagura Bay (i.e., over an area of approximately 6 × 5 km) and represents the largest submerged karst in Japan.

  8. A HYBRID FILTER AND WRAPPER FEATURE SELECTION APPROACH FOR DETECTING CONTAMINATION IN DRINKING WATER MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    S. VISALAKSHI

    2017-07-01

    Full Text Available Feature selection is an important task in predictive models; it helps to identify the irrelevant features in a high-dimensional dataset. For this water contamination detection dataset, a standard wrapper algorithm alone cannot be applied because of the computational complexity. To overcome this complexity and lighten the computation, a filter-wrapper based algorithm is proposed. In this work, reducing the feature space is a significant component of water contamination detection. The main findings are as follows: (1) the main goal is to speed up the feature selection process, so the proposed filter-based feature pre-selection is applied, which guarantees that useful data are unlikely to be discarded in the initial stage, as discussed briefly in this paper. (2) The resulting features are again filtered by using a Genetic Algorithm coded with the Support Vector Machine method, which helps to narrow down the subset of features with high accuracy and decreases the expense. Experimental results show that the proposed method trims down redundant features effectively and achieves better classification accuracy.

  9. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    Full Text Available This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on processing appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour-word test. Video, accelerometer and biosignal (electrocardiogram and galvanic skin response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.

  10. Cue combination in a combined feature contrast detection and figure identification task.

    Science.gov (United States)

    Meinhardt, Günter; Persike, Malte; Mesenholl, Björn; Hagemann, Cordula

    2006-11-01

    Target figures defined by feature contrast in spatial frequency, orientation or both cues had to be detected in Gabor random fields and their shape had to be identified in a dual task paradigm. Performance improved with increasing feature contrast and was strongly correlated among both tasks. Subjects performed significantly better with combined cues than with single cues. The improvement due to cue summation was stronger than predicted by the assumption of independent feature specific mechanisms, and increased with the performance level achieved with single cues until it was limited by ceiling effects. Further, cue summation was also strongly correlated among tasks: when there was benefit due to the additional cue in feature contrast detection, there was also benefit in figure identification. For the same performance level achieved with single cues, cue summation was generally larger in figure identification than in feature contrast detection, indicating more benefit when processes of shape and surface formation are involved. Our results suggest that cue combination improves spatial form completion and figure-ground segregation in noisy environments, and therefore leads to more stable object vision.

  11. A review of feature detection and match algorithms for localization and mapping

    Science.gov (United States)

    Li, Shimiao

    2017-09-01

    Localization and mapping is an essential ability of a robot to keep track of its own location in an unknown environment. Among existing methods for this purpose, vision-based methods are more effective solutions for being accurate, inexpensive and versatile. Vision-based methods can generally be categorized as feature-based approaches and appearance-based approaches. The feature-based approaches show higher performance in textured scenarios; however, their performance depends highly on the applied feature-detection algorithms. In this paper, we survey algorithms for feature detection, which is an essential step in achieving vision-based localization and mapping, and present mathematical models of the algorithms one after another. To compare the performance of the algorithms, we conducted a series of experiments on their accuracy, speed, scale invariance and rotation invariance. The results of the experiments showed that ORB is the fastest algorithm in detecting and matching features, with a speed more than 10 times that of SURF and approximately 40 times that of SIFT. SIFT, although it has no advantage in terms of speed, finds the most correct matching pairs and proves the most accurate.
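
    As a concrete example of the kind of detector/descriptor the survey finds fastest, the sketch below detects and matches ORB features with OpenCV on a synthetic image pair; real use would load camera frames instead of the drawn shapes, which are only an illustrative stand-in.

```python
# Illustrative sketch: ORB feature detection and Hamming-distance matching with OpenCV.
import cv2
import numpy as np

# Build a simple synthetic scene and a translated copy of it.
img1 = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(img1, (60, 60), (140, 140), 255, -1)
cv2.circle(img1, (220, 120), 40, 180, -1)
img2 = np.roll(img1, shift=(12, 25), axis=(0, 1))      # simulate camera motion

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching of binary descriptors with cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(kp1), "and", len(kp2), "keypoints,", len(matches), "cross-checked matches")
```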

  12. Comparing experts and novices in Martian surface feature change detection and identification

    Science.gov (United States)

    Wardlaw, Jessica; Sprinks, James; Houghton, Robert; Muller, Jan-Peter; Sidiropoulos, Panagiotis; Bamford, Steven; Marsh, Stuart

    2018-02-01

    Change detection in satellite images is a key concern of the Earth Observation field for environmental and climate change monitoring. Satellite images also provide important clues to both the past and present surface conditions of other planets, which cannot be validated on the ground. With the volume of satellite imagery continuing to grow, the inadequacy of computerised solutions to manage and process imagery to the required professional standard is of critical concern. Whilst studies find the crowdsourcing approach suitable for the counting of impact craters in single images, images of higher resolution contain a much wider range of features, and the performance of novices in identifying more complex features and detecting change remains unknown. This paper presents a first step towards understanding whether novices can identify and annotate changes in different geomorphological features. A website was developed to enable visitors to flick between two images of the same location on Mars taken at different times and classify 1) whether a surface feature changed and, if so, 2) what feature had changed, from a pre-defined list of six. Planetary scientists provided 'expert' data against which classifications made by novices could be compared when the project subsequently went public. Whilst no significant difference was found in the images identified as showing surface changes by experts and novices, the results exhibited differences in consensus within and between experts and novices when asked to classify the type of change. Experts demonstrated higher levels of agreement in classifying changes as dust devil tracks, slope streaks and impact craters than other features, whilst the consensus of novices was consistent across feature types. These trends are secondary to the low levels of consensus found overall, regardless of feature type or classifier expertise. These findings demand the attention of researchers who

  13. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  14. Ship detection in South African oceans using SAR, CFAR and a Haar-like feature classifier

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2014-07-01

    Full Text Available 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec, Canada, 13-18 July 2014. Ship Detection in South African Oceans Using SAR, CFAR and a Haar-like Feature Classifier. C. P. Schwegmann, W. Kleynhans, B. P. Salmon...

  15. Fast detection of vascular plaque in optical coherence tomography images using a reduced feature set

    Science.gov (United States)

    Prakash, Ammu; Ocana Macias, Mariano; Hewko, Mark; Sowa, Michael; Sherif, Sherif

    2018-03-01

    Vascular plaque can be detected in optical coherence tomography (OCT) images by using the full set of 26 Haralick textural features and a standard k-means clustering algorithm. However, the use of the full set of 26 textural features is computationally expensive and may not be feasible for real-time implementation. In this work, we identified a reduced set of 3 textural features which characterizes vascular plaque and used a generalized Fuzzy C-means clustering algorithm. Our work involves three steps: 1) the reduction of the full set of 26 textural features to a reduced set of 3 textural features by using a genetic algorithm (GA) optimization method, 2) the implementation of an unsupervised generalized clustering algorithm (Fuzzy C-means) on the reduced feature space, and 3) the validation of our results using histology and actual photographic images of vascular plaque. Our results show an excellent match with histology and actual photographic images of vascular tissue. Therefore, our results could provide an efficient pre-clinical tool for the detection of vascular plaque in real-time OCT imaging.
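
    To make the pipeline concrete, the sketch below computes a reduced set of three GLCM (Haralick-style) texture features per image block and clusters the blocks with a small hand-rolled fuzzy c-means. The particular features, block size and constants are illustrative assumptions and not the subset selected by the GA in the paper.

```python
# Illustrative sketch: three GLCM texture features per block + fuzzy c-means clustering.
import numpy as np

def glcm_features(block, levels=8):
    """Return (contrast, homogeneity, energy) from a horizontal-offset GLCM."""
    q = np.floor(block * (levels - 1e-9)).astype(int)       # quantise a [0,1) image
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: U_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    return U, centers

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    smooth = [rng.random((16, 16)) * 0.1 + 0.5 for _ in range(20)]   # plaque-free-like blocks
    rough = [rng.random((16, 16)) for _ in range(20)]                # plaque-like textured blocks
    X = np.array([glcm_features(b) for b in smooth + rough])
    U, _ = fuzzy_cmeans(X)
    print("cluster of first smooth block:", U[0].argmax(),
          "cluster of first rough block:", U[20].argmax())
```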

  16. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% in terms of F-score and 93% balanced accuracy.

  17. A Research on Fast Face Feature Points Detection on Smart Mobile Devices

    Directory of Open Access Journals (Sweden)

    Xiaohe Li

    2018-01-01

    Full Text Available We explore how to improve the performance of face feature point detection on mobile terminals from 3 aspects. First, we optimize the models used in SDM algorithms via PCA and spectrum clustering. Second, we propose an evaluation criterion using Linear Discriminant Analysis to choose the best local feature descriptors, which play a critical role in feature point detection. Third, we take advantage of the multicore architecture of the mobile terminal and parallelize the optimized SDM algorithm to further improve efficiency. The experimental observations show that our final GPC-SDM (improved Supervised Descent Method using spectrum clustering, PCA, and GPU acceleration) reduces memory usage and is efficient enough to meet real-time requirements.

  18. Bathymetric Contour Maps of Lakes Surveyed in Iowa in 2005

    Science.gov (United States)

    Linhart, S.M.; Lund, K.D.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, conducted bathymetric surveys on seven lakes in Iowa during 2005 (Arrowhead Pond, Central Park Lake, Lake Keomah, Manteno Park Pond, Lake Miami, Springbrook Lake, and Yellow Smoke Lake). The surveys were conducted to provide the Iowa Department of Natural Resources with information for the development of total maximum daily load limits, particularly for estimating sediment load and deposition rates. The bathymetric surveys provide a baseline for future work on sediment loads and deposition rates for these lakes. All of the lakes surveyed in 2005 are man-made lakes with fixed spillways. Bathymetric data were collected using boat-mounted, differential global positioning system, echo depth-sounding equipment, and computer software. Data were processed with commercial hydrographic software and exported into a geographic information system for mapping and calculating area and volume. Lake volume estimates ranged from 47,784,000 cubic feet (1,100 acre-feet) at Lake Miami to 2,595,000 cubic feet (60 acre-feet) at Manteno Park Pond. Surface area estimates ranged from 5,454,000 square feet (125 acres) at Lake Miami to 558,000 square feet (13 acres) at Springbrook Lake.

  19. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

    Full Text Available Abstract Background The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  20. The impact of signal normalization on seizure detection using line length features.

    Science.gov (United States)

    Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J

    2015-10-01

    Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
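
    For reference, the line length feature is simply the sum of absolute first differences over a sliding window, and a decaying-memory normalisation divides it by a running median-like estimate. The sketch below shows one simple realisation; the paper's exact normalisation is not reproduced, and the synthetic signal, window length and update rate are assumptions.

```python
# Illustrative sketch: line length feature over sliding EEG windows, normalised
# by a running decaying-memory median estimate.
import numpy as np

def line_length(window):
    """Sum of absolute first differences: emphasises high-amplitude, high-frequency activity."""
    return np.abs(np.diff(window)).sum()

def normalised_line_length(signal, win=256, step=128, eta=0.05):
    median_est, out = None, []
    for start in range(0, len(signal) - win + 1, step):
        ll = line_length(signal[start:start + win])
        if median_est is None:
            median_est = ll
        # Decaying-memory median tracker: nudge the estimate a small fraction
        # of its own size toward each new observation.
        median_est += eta * median_est * np.sign(ll - median_est)
        out.append(ll / (median_est + 1e-12))
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.arange(0, 60.0, 1 / 256)                       # 60 s of "EEG" at 256 Hz
    eeg = rng.normal(scale=10.0, size=t.size)             # background activity (arbitrary units)
    seizure = (t > 30) & (t < 40)
    eeg[seizure] += 100.0 * np.sin(2 * np.pi * 20 * t[seizure])   # injected rhythmic burst
    feature = normalised_line_length(eeg)
    print("peak normalised line length:", round(feature.max(), 2),
          "baseline:", round(np.median(feature), 2))
```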

  1. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which are often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is cropped patch-wise and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the computed correspondences achieve pixel-level positioning accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.

  2. Using Temporal Covariance of Motion and Geometric Features via Boosting for Human Fall Detection.

    Science.gov (United States)

    Ali, Syed Farooq; Khan, Reamsha; Mahmood, Arif; Hassan, Malik Tahir; Jeon, Moongu

    2018-06-12

    Fall-induced injuries are serious incidents for elderly as well as young persons. A real-time, automatic and accurate fall detection system can play a vital role in timely medical care, which will ultimately help to decrease injuries and complications. In this paper, we propose a fast and more accurate real-time system which can detect people falling in videos captured by surveillance cameras. Novel temporal and spatial variance-based features are proposed which comprise the discriminatory motion, geometric orientation and location of the person. These features are used along with an ensemble learning strategy of boosting with J48 and Adaboost classifiers. Experiments have been conducted on publicly available standard datasets including Multiple Cameras Fall (with 2 classes and 3 classes) and UR Fall Detection, achieving percentage accuracies of 99.2, 99.25 and 99.0, respectively. Comparisons with nine state-of-the-art methods demonstrate the effectiveness of the proposed approach on both datasets.

  3. An Ensemble Method with Integration of Feature Selection and Classifier Selection to Detect the Landslides

    Science.gov (United States)

    Zhongqin, G.; Chen, Y.

    2017-12-01

    Abstract Quickly identifying the spatial distribution of landslides automatically is essential for the prevention, mitigation and assessment of landslide hazards. It is still a challenging job owing to the complicated characteristics and vague boundaries of landslide areas in the image. High-resolution remote sensing images have multiple scales, complex spatial distributions and abundant features, and object-oriented image classification methods can make full use of this information to effectively detect landslides after a hazard has happened. In this research we present a new semi-supervised workflow, taking advantage of recent object-oriented image analysis and machine learning algorithms to quickly locate landslides of different origins in some areas in the southwest part of China. Besides a sequence of image segmentation, feature selection, object classification and error testing, this workflow ensembles the feature selection and classifier selection. The features this study utilized were the normalized difference vegetation index (NDVI) change, textural features derived from the gray level co-occurrence matrices (GLCM), spectral features, etc. The improvement of this study shows that this algorithm significantly removes redundant features and makes full use of the classifiers. All these improvements lead to a higher accuracy in determining the shape of landslides in high-resolution remote sensing images, in particular the flexibility aimed at different kinds of landslides.

  4. Feature Optimize and Classification of EEG Signals: Application to Lie Detection Using KPCA and ELM

    Directory of Open Access Journals (Sweden)

    GAO Junfeng

    2014-04-01

    Full Text Available EEG signals have been widely used to detect deception in recent years. To overcome the shortcomings of current signal processing, kernel principal component analysis (KPCA) and the extreme learning machine (ELM) were combined to detect liars. We recorded the EEG signals at Pz from 30 randomly divided guilty and innocent subjects. Every five Probe responses were averaged within each subject, and wavelet features were then extracted. KPCA was employed to select a feature subset with reduced dimensions based on the initial wavelet features, which was fed into the ELM. To date, there is no perfect solution for the number of its hidden nodes (NHN). We used a grid searching algorithm to simultaneously select the optimal values of the feature subset dimension and the NHN based on the cross-validation method. The best classification model was decided with the optimal searching values. Experimental results show that for EEG signals from the lie detection experiment, KPCA_ELM has higher classification accuracy with faster training speed than other widely used classification models, which is especially suitable for online EEG signal processing systems.
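
    A minimal sketch of the two-stage idea, under assumptions: scikit-learn's KernelPCA reduces the feature dimension and a simple extreme learning machine (random hidden layer, least-squares output weights) classifies the result. The SimpleELM class, the synthetic stand-in for averaged wavelet features, and all dimensions are illustrative; in practice the feature-subset dimension and the number of hidden nodes would be chosen by the grid search with cross-validation mentioned above.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split

class SimpleELM:
    """Single-hidden-layer extreme learning machine: random hidden weights,
    least-squares output weights (an illustrative, minimal implementation)."""
    def __init__(self, n_hidden=50, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y        # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

# synthetic stand-in for averaged wavelet features of Probe responses
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 40)), rng.normal(0.8, 1, (60, 40))])
y = np.r_[np.zeros(60), np.ones(60)]
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

kpca = KernelPCA(n_components=10, kernel="rbf").fit(Xtr)   # dimension reduction
elm = SimpleELM(n_hidden=50).fit(kpca.transform(Xtr), ytr)
print("test accuracy:", (elm.predict(kpca.transform(Xte)) == yte).mean())
```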

  5. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

    Full Text Available Successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform very well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques, histogram equalization, high-pass and low-pass filters, were considered to increase the signal-to-noise ratio, sharpen the image and remove noise, respectively. The pre-processed images were then evaluated using current image detection and feature extraction methods, the Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms. The results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
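
    A small sketch of the pre-processing idea with OpenCV: histogram equalization plus simple low-pass and high-pass style operations are applied to a synthetic low-contrast frame, and the number of MSER regions detected before and after enhancement is compared. SURF is omitted here because it sits in the non-free contrib module; the synthetic frame and all parameter values are assumptions for illustration only.

```python
import cv2
import numpy as np

# Synthetic low-contrast 8-bit frame, a stand-in for a 382 x 288 thermal image.
rng = np.random.default_rng(0)
frame = rng.normal(120, 6, (288, 382)).clip(0, 255).astype(np.uint8)
cv2.rectangle(frame, (100, 80), (180, 160), 135, -1)       # faint warm object

def count_features(img):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(img)
    return len(regions)

equalised = cv2.equalizeHist(frame)                         # stretch the histogram
smoothed = cv2.GaussianBlur(equalised, (3, 3), 0)           # low-pass: suppress noise
sharpened = cv2.addWeighted(equalised, 1.5, smoothed, -0.5, 0)  # unsharp-mask style high-pass boost

for name, img in [("raw", frame), ("equalised", equalised), ("sharpened", sharpened)]:
    print(name, "MSER regions:", count_features(img))
```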

  6. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.

  7. Matching-range-constrained real-time loop closure detection with CNNs features.

    Science.gov (United States)

    Bai, Dongdong; Wang, Chaoqun; Zhang, Bo; Yi, Xiaodong; Tang, Yuhua

    2016-01-01

    Loop closure detection (LCD) is an essential part of visual simultaneous localization and mapping (SLAM) systems. LCD is capable of identifying and compensating for the accumulated drift of localization algorithms to produce a consistent map if the loops are detected correctly. Deep convolutional neural networks (CNNs) have outperformed state-of-the-art solutions that use traditional hand-crafted features in many computer vision and pattern recognition applications. After the great success of CNNs, there has been much interest in applying CNN features to robotic fields such as visual LCD. Some researchers focus on using a pre-trained CNN model as a method of generating an image representation appropriate for visual loop closure detection in SLAM. However, there are many fundamental differences and challenges between simple computer vision applications and robotic applications. Firstly, adjacent images in a loop closure detection dataset might resemble each other more than the images that actually form the loop closure. Secondly, real-time performance is one of the most critical demands for robots. In this paper, we focus on making use of the features generated by CNN layers to implement LCD in real environments. In order to address the above challenges, we explicitly provide a value to limit the matching range of images to solve the first problem; meanwhile, we obtain better results than state-of-the-art methods and improve the real-time performance using an efficient feature compression method.
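
    A small sketch of the matching-range constraint on precomputed CNN descriptors: the query frame is compared by cosine similarity only against frames older than a minimum temporal gap, so that naturally similar adjacent images are excluded. The min_gap and threshold values and the random descriptors are assumptions, not values from the paper.

```python
import numpy as np

def detect_loop_closure(features, query_idx, min_gap=50, threshold=0.9):
    """Compare the query frame's (compressed) CNN descriptor against earlier
    frames, excluding the `min_gap` most recent ones so that temporally
    adjacent, naturally similar images are not reported as loop closures."""
    candidates = features[:max(query_idx - min_gap, 0)]
    if len(candidates) == 0:
        return None
    q = features[query_idx]
    sims = candidates @ q / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(q) + 1e-12)
    best = int(np.argmax(sims))
    return (best, float(sims[best])) if sims[best] >= threshold else None

# toy example: 200 random descriptors, frame 180 revisits the place seen at frame 20
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 128))
feats[180] = feats[20] + rng.normal(scale=0.05, size=128)
print(detect_loop_closure(feats, query_idx=180))   # expected: match with frame 20
```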

  8. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Mattausch, Hans Jürgen

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
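
    The hardware contribution itself cannot be reproduced in a few lines, but the processing idea can be: the sketch below uses scikit-image's HOG descriptor with L1 block normalisation (the normalisation the paper's circuit computes) feeding a linear SVM. The synthetic shape images and all parameters are assumptions for illustration.

```python
import numpy as np
from skimage.feature import hog
from skimage.draw import disk, rectangle
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def make_image(kind, rng):
    """Toy 32 x 32 image containing either a disk (class 0) or a square (class 1)."""
    img = rng.normal(0.1, 0.05, (32, 32))
    if kind == 0:
        rr, cc = disk((16, 16), 8)
    else:
        rr, cc = rectangle(start=(8, 8), end=(24, 24))
    img[rr, cc] += 0.8
    return img

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
images = [make_image(k, rng) for k in labels]

# HOG descriptor with L1 block normalisation
X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm='L1') for im in images])

Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = LinearSVC().fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```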

  9. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
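
    A compact sketch of one of the better-performing workflows reported above: the seasonal cycle is removed as a feature extraction step, and the k-nearest-neighbours mean distance is then used as a multivariate anomaly score. The synthetic two-variable series, the injected event and all parameter choices are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
t = np.arange(0, 365 * 4)                        # four years of daily observations
seasonal = 2.0 * np.sin(2 * np.pi * t / 365.0)
series = np.column_stack([seasonal + rng.normal(0, 0.3, t.size),
                          0.5 * seasonal + rng.normal(0, 0.3, t.size)])
series[800:820] += 3.0                           # injected anomalous event

# Feature extraction step: remove the mean seasonal cycle per calendar day
day = t % 365
clim = np.array([series[day == d].mean(axis=0) for d in range(365)])
deseasonalised = series - clim[day]

# Detection step: kNN mean distance as a multivariate anomaly score
nn = NearestNeighbors(n_neighbors=10).fit(deseasonalised)
dist, _ = nn.kneighbors(deseasonalised)
score = dist[:, 1:].mean(axis=1)                 # drop the self-distance in column 0
print("top anomalous days:", np.argsort(score)[-5:])
```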

  10. A Framework of Change Detection Based on Combined Morphological Features and Multi-Index Classification

    Science.gov (United States)

    Li, S.; Zhang, S.; Yang, D.

    2017-09-01

    Remote sensing images are particularly well suited for the analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.

  11. A FRAMEWORK OF CHANGE DETECTION BASED ON COMBINED MORPHOLOGICAL FEATURES AND MULTI-INDEX CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Li

    2017-09-01

    Full Text Available Remote sensing images are particularly well suited for the analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
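
    For illustration, a small sketch of the index computations used in this kind of extraction, assuming reflectance-scaled bands and illustrative thresholds (the exact thresholds and band handling of the paper are not specified here):

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced vegetation index (bands assumed scaled to [0, 1] reflectance)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters formulation)."""
    return (green - nir) / (green + nir + 1e-12)

# toy 2 x 2 scene: top row vegetated, bottom row water (reflectance values assumed)
blue  = np.array([[0.04, 0.05], [0.06, 0.06]])
green = np.array([[0.08, 0.09], [0.10, 0.11]])
red   = np.array([[0.06, 0.07], [0.05, 0.05]])
nir   = np.array([[0.45, 0.50], [0.03, 0.02]])

veg_mask   = evi(nir, red, blue) > 0.3       # threshold values are illustrative
water_mask = ndwi(green, nir) > 0.3
print(veg_mask, water_mask, sep="\n")
```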

  12. Robust and fast license plate detection based on the fusion of color and edge feature

    Science.gov (United States)

    Cai, De; Shi, Zhonghan; Liu, Jin; Hu, Chuanping; Mei, Lin; Qi, Li

    2014-11-01

    Extracting a license plate is an important stage in automatic vehicle identification. Image degradation and the computational intensity make this task difficult. In this paper, a robust and fast license plate detection method based on the fusion of color and edge features is proposed. Based on the dichromatic reflection model, two new color ratios computed from the RGB color model are introduced and proved to be color invariants. The global color feature extracted by the new color invariants improves the method's robustness. The local Sobel edge feature guarantees the method's accuracy. In the experiments, the detection performance is good. The detection results show that this paper's method is robust to the illumination, the object geometry and disturbances around the license plates. The method can also detect license plates when the color of the car body is the same as the color of the plates. The processing time for an image size of 1000 x 1000 pixels is nearly 0.2 s. Based on the comparison, the performance of the new ratios is comparable to that of the commonly used HSI color model.

  13. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    Science.gov (United States)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar HydroStar 4300 and GPS devices; an Ashtech ProMark 500 base, and a Thales Z-Max® rover. A total of 12 851 points were gathered. In order to obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
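
    A brief sketch of the interpolation step with SciPy's multiquadric radial basis function on synthetic scattered soundings, including a simple hold-out error check and a toy raster and volume computation. The synthetic lake shape, the hold-out fraction and the grid spacing are assumptions, not the survey's actual processing.

```python
import numpy as np
from scipy.interpolate import Rbf

# Synthetic stand-in for scattered echo-sounder points (x, y in metres, z = depth)
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 500), rng.uniform(0, 1000, 500)
z = -5 - 10 * np.exp(-((x - 500)**2 + (y - 500)**2) / 200_000) + rng.normal(0, 0.1, 500)

# hold out 20% of the soundings to estimate the interpolation error
test = rng.random(500) < 0.2
rbf = Rbf(x[~test], y[~test], z[~test], function="multiquadric")
rmse = np.sqrt(np.mean((rbf(x[test], y[test]) - z[test]) ** 2))
print(f"hold-out RMSE: {rmse:.3f} m")

# depth raster on a regular 10 m grid, analogous to a raster model of the lake bed
gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
grid_depth = rbf(gx, gy)
print("rough volume estimate (m^3):", float(np.sum(-grid_depth) * 10 * 10))
```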

  14. Computerized detection of diffuse lung disease in MDCT: the usefulness of statistical texture features

    International Nuclear Information System (INIS)

    Wang Jiahui; Li Qiang; Li Feng; Doi Kunio

    2009-01-01

    Accurate detection of diffuse lung disease is an important step for computerized diagnosis and quantification of this disease. It is also a difficult clinical task for radiologists. We developed a computerized scheme to assist radiologists in the detection of diffuse lung disease in multi-detector computed tomography (CT). Two radiologists selected 31 normal and 37 abnormal CT scans with ground glass opacity, reticular, honeycombing and nodular disease patterns based on clinical reports. The abnormal cases in our database must contain at least an abnormal area with a severity of moderate or severe level that was subjectively rated by the radiologists. Because statistical texture features may lack the power to distinguish a nodular pattern from a normal pattern, the abnormal cases that contain only a nodular pattern were excluded. The areas that included specific abnormal patterns in the selected CT images were then delineated as reference standards by an expert chest radiologist. The lungs were first segmented in each slice by use of a thresholding technique, and then divided into contiguous volumes of interest (VOIs) with a 64 x 64 x 64 matrix size. For each VOI, we determined and employed statistical texture features, such as run-length and co-occurrence matrix features, to distinguish abnormal from normal lung parenchyma. In particular, we developed new run-length texture features with clear physical meanings to considerably improve the accuracy of our detection scheme. A quadratic classifier was employed for distinguishing between normal and abnormal VOIs by the use of a leave-one-case-out validation scheme. A rule-based criterion was employed to further determine whether a case was normal or abnormal. We investigated the impact of new and conventional texture features, VOI size and the dimensionality for regions of interest on detecting diffuse lung disease. When we employed new texture features for 3D VOIs of 64 x 64 x 64 voxels, our system achieved the
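
    As an illustration of the co-occurrence part of the texture analysis, the sketch below quantizes a 2-D slice of a VOI and computes standard GLCM statistics with scikit-image (the run-length features described in the paper are not available there and are omitted). A recent scikit-image version exposing graycomatrix/graycoprops is assumed, and the synthetic VOIs and parameter choices are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(volume_slice, levels=32):
    """Co-occurrence (GLCM) texture statistics for one 2-D slice of a VOI."""
    q = (volume_slice / volume_slice.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(0)
normal_voi   = rng.normal(100, 5, (64, 64)).clip(0, None)    # smooth parenchyma
abnormal_voi = rng.normal(100, 30, (64, 64)).clip(0, None)   # coarse, reticular-like texture
print("normal  :", texture_features(normal_voi))
print("abnormal:", texture_features(abnormal_voi))
```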

  15. Game Theoretic Approach for Systematic Feature Selection; Application in False Alarm Detection in Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Fatemeh Afghah

    2018-03-01

    Full Text Available Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices might generate false alarms that reduce the standard of care and result in desensitization of caregivers to alarms. Therefore, reducing the number of false alarms is of great importance. Many approaches, such as signal processing, machine learning, and the design of more accurate sensors, have been developed for this purpose. However, the significant intrinsic correlation among the features extracted from different sensors has been mostly overlooked. A majority of current data mining techniques fail to capture such correlation among the collected signals from different sensors, which limits their alarm recognition capabilities. Here, we propose a novel information-theoretic predictive modeling technique based on the idea of coalition game theory to enhance the accuracy of false alarm detection in ICUs by accounting for the synergistic power of signal attributes in the feature selection stage. This approach brings together techniques from information theory and game theory to account for inter-feature mutual information in determining the predictors most correlated with false alarms, by calculating the Banzhaf power of each feature. The numerical results show that the proposed method can enhance classification accuracy and improve the area under the ROC (receiver operating characteristic) curve compared to other feature selection techniques, when integrated into classifiers, such as Bayes-Net, that consider inter-feature dependencies.

  16. Flying control of small-type helicopter by detecting its in-air natural features

    Directory of Open Access Journals (Sweden)

    Chinthaka Premachandra

    2015-05-01

    Full Text Available Control of a small-type helicopter is an interesting research area in unmanned aerial vehicle development. This study aims to detect a more typical helicopter, unequipped with markers, as a means to resolve the various issues of prior studies. Accordingly, we propose a method of detecting the helicopter location and pose by using an infrastructure camera to recognize its in-air natural features, such as the ellipses traced by the rotation of the helicopter's propellers. A single-rotor system helicopter was used as the controlled airframe in our experiments. Here, the helicopter location is measured by detecting the main rotor ellipse center, and the pose is measured from the relationship between the main rotor ellipse and the tail rotor ellipse. Based on these detection results, we confirmed through experiments that hovering control of the helicopter is possible.

  17. Nodule detection methods using autocorrelation features on 3D chest CT scans

    International Nuclear Information System (INIS)

    Hara, T.; Zhou, X.; Okura, S.; Fujita, H.; Kiryu, T.; Hoshi, H.

    2007-01-01

    Lung cancer screening using low-dose X-ray CT scans has become an accepted examination to detect cancers at an early stage. We have been developing an automated detection scheme for lung nodules on CT scans by using second-order autocorrelation features, and the initial performance for small nodules (< 10 mm) shows a high true-positive rate with less than four false-positive marks per case. In this study, an open database of lung images, LIDC (Lung Image Database Consortium), was employed to evaluate our detection scheme as a consistency test. The detection performance for solid and solitary nodules in LIDC, included in the first data set opened by the consortium, was an 83% (10/12) true-positive rate with 3.3 false-positive marks per case. (orig.)

  18. Fault detection of Tennessee Eastman process based on topological features and SVM

    Science.gov (United States)

    Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen

    2018-03-01

    Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection in all industrial systems. In this paper, we propose a novel method based on topological features and the support vector machine (SVM) for fault detection in industrial processes. The proposed method takes global information of the measured variables into account through a complex network model and predicts with the SVM whether a system has generated faults or not. The proposed method can be divided into four steps, i.e. network construction, network analysis, model training and model testing, respectively. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.

  19. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in the early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study consisted of: acquisition of an image database, Optic Disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  20. Part-based Pedestrian Detection and Feature-based Tracking for Driver Assistance

    DEFF Research Database (Denmark)

    Prioletti, Antonio; Møgelmose, Andreas; Grislieri, Paolo

    2013-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusion, and high-speed vehicle motion. Much research has been focused on this problem in the last ten years and detectors based on classifiers have...... on a prototype vehicle and offers high performance in terms of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system relies on the combination of a HOG part-based approach, tracking based on a specific optimized feature, and porting on a real prototype....

  1. Aircraft Detection from VHR Images Based on Circle-Frequency Filter and Multilevel Features

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2013-01-01

    Full Text Available Automatic aircraft detection from very high-resolution (VHR) images plays an important role in a wide variety of applications. This paper proposes a novel detector for aircraft detection from VHR remote sensing images. To accurately distinguish aircraft from the background, a circle-frequency filter (CF-filter) is used to extract the candidate locations of aircraft from a large image. A multi-level feature model is then employed to represent both the local appearance and the spatial layout of aircraft by means of the Robust Hue Descriptor and the Histogram of Oriented Gradients. The experimental results demonstrate the superior performance of the proposed method.

  2. Epileptic MEG Spike Detection Using Statistical Features and Genetic Programming with KNN

    Directory of Open Access Journals (Sweden)

    Turky N. Alotaiby

    2017-01-01

    Full Text Available Epilepsy is a neurological disorder that affects millions of people worldwide. Monitoring brain activity and identifying the seizure source, which starts with spike detection, are important steps in epilepsy treatment. Magnetoencephalography (MEG) is an emerging epileptic diagnostic tool with high-density sensors; this makes manual analysis a challenging task due to the vast amount of MEG data. This paper explores the use of eight statistical features and genetic programming (GP) with the K-nearest neighbor (KNN) classifier for interictal spike detection. The proposed method is comprised of three stages: preprocessing, genetic programming-based feature generation, and classification. The effectiveness of the proposed approach has been evaluated using real MEG data obtained from 28 epileptic patients. It has achieved a 91.75% average sensitivity and 92.99% average specificity.

  3. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... even in high-dimensional space. In addition, the latent connection between Rényi quadratic entropy and the mapping data in kernel feature space further facilitates us to capture the geometric structure as well as the information about the underlying labels of the CKD using CSQMI. Thus the resulting...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  4. A change detection method for remote sensing image based on LBP and SURF feature

    Science.gov (United States)

    Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun

    2018-04-01

    Finding changes in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of ground objects is more stable than the gray level in high-resolution remote sensing images, and the Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) texture features are outstanding in extraction speed and illumination invariance. A change detection method for matched remote sensing image pairs is presented, which divides the image into blocks and then compares the similarity of each block by LBP and SURF to decide whether the block has changed or not. Region growing is adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.
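
    A minimal sketch of the block-wise LBP comparison (the SURF cue and the region-growing step are omitted): each block's uniform-LBP histogram is compared between the two dates, and blocks whose histograms differ by more than a threshold are flagged. The block size, the threshold and the synthetic image pair are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(block, P=8, R=1):
    """Normalised histogram of uniform LBP codes for one image block."""
    lbp = local_binary_pattern(block, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def changed_blocks(img1, img2, block=32, thresh=0.25):
    """Flag blocks whose LBP histograms differ by more than `thresh` (L1 distance)."""
    rows, cols = img1.shape[0] // block, img1.shape[1] // block
    change = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * block, (r + 1) * block), slice(c * block, (c + 1) * block))
            change[r, c] = np.abs(lbp_hist(img1[sl]) - lbp_hist(img2[sl])).sum() > thresh
    return change

rng = np.random.default_rng(0)
before = rng.normal(128, 20, (128, 128)).clip(0, 255).astype(np.uint8)
after = (before.astype(float) + rng.normal(0, 2, before.shape)).clip(0, 255).astype(np.uint8)
after[64:96, 64:96] = np.tile(np.linspace(0, 255, 32), (32, 1)).astype(np.uint8)  # new structure
print(changed_blocks(before, after).astype(int))
```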

  5. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    Science.gov (United States)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations of the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.

  6. Bilateral symmetry detection on the basis of Scale Invariant Feature Transform.

    Directory of Open Access Journals (Sweden)

    Habib Akbar

    Full Text Available The automatic detection of bilateral symmetry is a challenging task in computer vision and pattern recognition. This paper presents an approach for the detection of bilateral symmetry in digital single-object images. Our method relies on the extraction of Scale Invariant Feature Transform (SIFT) based feature points, which serve as the basis for ascertaining the centroid of the object; the latter is taken as the origin of the Cartesian coordinate system, which is converted to the polar coordinate system in order to facilitate the selection of symmetric coordinate pairs. This is followed by comparing the gradient magnitude and orientation of the corresponding points to evaluate the amount of symmetry exhibited by each pair of points. The experimental results show that our approach draws the symmetry line accurately, provided that the observed centroid point is true.

  7. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  8. Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available The wide availability of image processing software makes counterfeiting an easy and low-cost way to distort or conceal facts. Driven by the great need for valid forensic techniques, many methods have been proposed to expose such forgeries. In this paper, we proposed an integrated algorithm which was able to detect two commonly used fraud practices: copy-move and splicing forgery in digital pictures. To achieve this target, a special descriptor for each block was created, combining the feature from the JPEG block artifact grid with that from noise estimation. A preliminary image quality assessment procedure reconciled these different features by setting proper weights. Experimental results showed that, compared to existing algorithms, our proposed method is effective in detecting both copy-move and splicing forgery regardless of the JPEG compression ratio of the input image.

  9. Incidental breast masses detected by computed tomography: are any imaging features predictive of malignancy?

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom)], E-mail: Gareth.Porter@phnt.swest.nhs.uk; Steel, J.; Paisley, K.; Watkins, R. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom); Holgate, C. [Department of Histopathology, Derriford Hospital, Plymouth (United Kingdom)

    2009-05-15

    Aim: To review the outcome of further assessment of breast abnormalities detected incidentally by multidetector computed tomography (MDCT) and to determine whether any MDCT imaging features were predictive of malignancy. Material and methods: The outcome of 34 patients referred to the Primrose Breast Care Unit with breast abnormalities detected incidentally using MDCT was prospectively recorded. Women with a known diagnosis of breast cancer were excluded. CT imaging features and histological diagnoses were recorded and the correlation assessed using Fisher's exact test. Results: Of the 34 referred patients a malignant diagnosis was noted in 11 (32%). There were 10 breast malignancies (seven invasive ductal carcinomas, one invasive lobular carcinoma, two metastatic lesions) and one axillary lymphoma. CT features suggestive of breast malignancy were spiculation [6/10 (60%) versus 0/24 (0%) p = 0.0002] and associated axillary lymphadenopathy [3/10 (33%) versus 0/20 (0%) p = 0.030]. Conversely, a well-defined mass was suggestive of benign disease [10/24 (42%) versus 0/10 (0%); p = 0.015]. Associated calcification, ill-definition, heterogeneity, size, and multiplicity of lesions were not useful discriminating CT features. There was a non-significant trend for lesions in involuted breasts to be more frequently malignant than in dense breasts [6/14 (43%) versus 4/20 (20%) p = 0.11]. Conclusion: In the present series there was a significant rate (32%) of malignancy in patients referred to the breast clinic with CT-detected incidental breast lesions. The CT features of spiculation or axillary lymphadenopathy are strongly suggestive of malignancy.

  10. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras, allowing more clinically relevant retinopathy to be detected. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results. However, in 40% of cases, more retinopathy was found outside the 7 ETDRS fields by UWF, and in 10% of cases, retinopathy was reclassified as more severe. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Pattern features. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.

  11. Sleep Spindle Detection and Prediction Using a Mixture of Time Series and Chaotic Features

    Directory of Open Access Journals (Sweden)

    Amin Hekmatmanesh

    2017-01-01

    Full Text Available It is well established that sleep spindles (bursts of oscillatory brain electrical activity) are significant indicators of learning, memory and some disease states. Therefore, many attempts have been made to detect these hallmark patterns automatically. In this pilot investigation, we paid special attention to nonlinear chaotic features of EEG signals (in combination with linear features) to investigate the detection and prediction of sleep spindles. These nonlinear features included: Higuchi's, Katz's and Sevcik's Fractal Dimensions, as well as the Largest Lyapunov Exponent and Kolmogorov's Entropy. It was shown that the intensity map of various nonlinear features derived from the constructive interference of spindle signals could improve the detection of the sleep spindles. It was also observed that the prediction of sleep spindles could be facilitated by means of the analysis of these maps. Two well-known classifiers, namely the Multi-Layer Perceptron (MLP) and the K-Nearest Neighbor (KNN), were used to distinguish between spindle and non-spindle patterns. The MLP classifier produced a high discriminative capacity (accuracy = 94.93%, sensitivity = 94.31% and specificity = 95.28%) with significant robustness (accuracy ranging from 91.33% to 94.93%, sensitivity varying from 91.20% to 94.31%, and specificity extending from 89.79% to 95.28%) in separating spindles from non-spindles. This classifier also generated the best results in predicting sleep spindles based on chaotic features. In addition, the MLP was used to find the best time window for predicting the sleep spindles, with the experimental results reaching 97.96% accuracy.
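
    One of the chaotic features named above, Higuchi's fractal dimension, can be computed as in the sketch below. The implementation follows the standard formulation of Higuchi's algorithm; the synthetic background and spindle signals and the k_max value are assumptions for illustration.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi's fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = x.size
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((idx.size - 1) * k)     # Higuchi's normalisation factor
            Lk.append(dist * norm / k)
        lengths.append(np.mean(Lk))
    # slope of log(L(k)) against log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 256)                          # 2 s at 256 Hz
background = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)
spindle = np.sin(2 * np.pi * 13 * t) * np.exp(-((t - 1) ** 2) / 0.05)
print("background FD:   ", round(higuchi_fd(background), 3))
print("with spindle FD: ", round(higuchi_fd(background + spindle), 3))
```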

  12. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT)

    Directory of Open Access Journals (Sweden)

    Boris Jutzi

    2011-09-01

    Full Text Available The real and imaginary parts are proposed as an alternative to the usual Polar representation of complex-valued images. It is proven that the transformation from Polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT) detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: Range Imaging, Laser Scanning, and Structured Light Projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect).

  13. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    Science.gov (United States)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, with Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to the prevalent methods for diagnosis of CTS (i.e. nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken from healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for spatial temperature variability.

  14. CoMIC: Good features for detection and matching at object boundaries

    OpenAIRE

    Ravindran, Swarna Kamlam; Mittal, Anurag

    2014-01-01

    Feature or interest points typically use information aggregation in 2D patches which does not remain stable at object boundaries when there is object motion against a significantly varying background. Level or iso-intensity curves are much more stable under such conditions, especially the longer ones. In this paper, we identify stable portions on long iso-curves and detect corners on them. Further, the iso-curve associated with a corner is used to discard portions from the background and impr...

  15. Shape based automated detection of pulmonary nodules with surface feature based false positive reduction

    International Nuclear Information System (INIS)

    Nomura, Y.; Itoh, H.; Masutani, Y.; Ohtomo, K.; Maeda, E.; Yoshikawa, T.; Hayashi, N.

    2007-01-01

    We proposed a shape-based automated detection method for pulmonary nodules with surface-feature-based false positive (FP) reduction. In the proposed system, FPs located inside vessel bifurcations are removed using the extracted surfaces of vessels and nodules. From a validation with 16 chest CT scans, we find that the proposed CAD system achieves 18.7 FPs/scan at 90% sensitivity, and 7.8 FPs/scan at 80% sensitivity. (orig.)

  16. Comparison of feature extraction methods within a spatio-temporal land cover change detection framework

    CSIR Research Space (South Africa)

    Kleynhans, W

    2011-07-01

    Full Text Available OF FEATURE EXTRACTION METHODS WITHIN A SPATIO-TEMPORAL LAND COVER CHANGE DETECTION FRAMEWORK ??W. Kleynhans,, ??B.P. Salmon, ?J.C. Olivier, ?K.J. Wessels, ?F. van den Bergh ? Electrical, Electronic and Computer Engi- neering University of Pretoria, South... Bergh, and K. Steenkamp, ?Improving land cover class separation using an extended Kalman filter on MODIS NDVI time series data,? IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 2, pp. 381?385, Apr. 2010. ...

  17. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination.

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

    We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). We proposed a novel feature combination comprising movement-related potentials such as the readiness potential, event-related desynchronization features besides the event-related potentials (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior compared to using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  18. Using cell nuclei features to detect colon cancer tissue in hematoxylin and eosin stained slides.

    Science.gov (United States)

    Jørgensen, Alex Skovsbo; Rasmussen, Anders Munk; Andersen, Niels Kristian Mäkinen; Andersen, Simon Kragh; Emborg, Jonas; Røge, Rasmus; Østergaard, Lasse Riis

    2017-08-01

    Currently, diagnosis of colon cancer is based on manual examination of histopathological images by a pathologist. This can be time consuming, and interpretation of the images is subject to inter- and intra-observer variability. This may be improved by introducing a computer-aided diagnosis (CAD) system for automatic detection of cancer tissue within whole slide hematoxylin and eosin (H&E) stains. Cancer disrupts the normal control mechanisms of cell proliferation and differentiation, affecting the structure and appearance of the cells. Therefore, extracting features from segmented cell nuclei structures may provide useful information to detect cancer tissue. A framework was proposed for automatic classification of regions of interest (ROIs) containing either benign or cancerous colon tissue extracted from whole slide H&E stained images using cell nuclei features. A total of 1,596 ROIs were extracted from 87 whole slide H&E stains (44 benign and 43 cancer). A cell nuclei segmentation algorithm consisting of color deconvolution, k-means clustering, local adaptive thresholding, and cell separation was performed within the ROIs to extract cell nuclei features. From the segmented cell nuclei structures a total of 750 texture and intensity-based features were extracted for classification of the ROIs. The nine most discriminative cell nuclei features were used in a random forest classifier to determine whether the ROIs contained benign or cancer tissue. The ROI classification obtained an area under the curve (AUC) of 0.96, sensitivity of 0.88, specificity of 0.92, and accuracy of 0.91 using an optimized threshold. The developed framework showed promising results in using cell nuclei features to classify ROIs as containing benign or cancer tissue in H&E stained tissue samples. © 2017 International Society for Advancement of Cytometry.

  19. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

    Objective. We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. Approach. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Main results. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). Significance. We proposed a novel feature combination comprising movement-related potentials such as the readiness potential, event-related desynchronization features besides the event-related potentials (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior compared to using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  20. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Shanshan Yang

    Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.

  1. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    Science.gov (United States)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which effective singular values are selected weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. Moreover, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments so that the feature information of a transient flaw echo signal is localized, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
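    A numpy sketch of the indicator idea (not the authors' implementation): slide an overlapping window over the record, form a small Hankel-like matrix from each segment, and keep the largest singular value; the window length, hop and matrix shape are illustrative assumptions.

    import numpy as np

    def max_singular_value_indicator(signal, win_len=64, hop=8, rows=16):
        # Slide an overlapping window over the record; for each position build a
        # small Hankel-like matrix from the segment and keep its largest singular
        # value as the indicator at that location.
        positions = np.arange(0, len(signal) - win_len + 1, hop)
        msi = np.empty(len(positions))
        cols = win_len - rows + 1
        for k, start in enumerate(positions):
            seg = signal[start:start + win_len]
            H = np.array([seg[i:i + cols] for i in range(rows)])
            msi[k] = np.linalg.svd(H, compute_uv=False)[0]
        return positions, msi

    # Synthetic record: white noise with a short flaw echo near sample 300.
    rng = np.random.default_rng(2)
    record = 0.3 * rng.standard_normal(1024)
    record[290:310] += np.sin(2 * np.pi * 0.2 * np.arange(20))
    pos, msi = max_singular_value_indicator(record)
    print("indicator peaks near sample", pos[np.argmax(msi)])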

  2. Computing Adaptive Feature Weights with PSO to Improve Android Malware Detection

    Directory of Open Access Journals (Sweden)

    Yanping Xu

    2017-01-01

    Full Text Available Android malware detection is a complex and crucial issue. In this paper, we propose a malware detection model using a support vector machine (SVM) method based on feature weights that are computed by information gain (IG) and particle swarm optimization (PSO) algorithms. The IG weights are evaluated based on the relevance between features and class labels, and the PSO weights are adaptively calculated to result in the best fitness (the performance of the SVM classification model). Moreover, to overcome the defects of basic PSO, we propose a new adaptive inertia weight method called fitness-based and chaotic adaptive inertia weight-PSO (FCAIW-PSO) that improves on basic PSO and is based on the fitness and a chaotic term. The goal is to assign suitable weights to the features to ensure the best Android malware detection performance. The results of experiments indicate that the IG weights and PSO weights both improve the performance of SVM and that the performance of the PSO weights is better than that of the IG weights.
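    As a hedged sketch of the information-gain side of the scheme, the fragment below weights features by their mutual information with the class label and evaluates an SVM on the rescaled features; the PSO and FCAIW-PSO weight search is omitted, and the toy data are assumptions.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def ig_weighted_svm_score(X, y):
        # Weight each feature by its mutual information with the class label
        # (an information-gain-style weight) and evaluate an SVM on the
        # rescaled features with 5-fold cross-validation.
        weights = mutual_info_classif(X, y, random_state=0)
        weights = weights / (weights.max() + 1e-12)
        return cross_val_score(SVC(kernel="rbf"), X * weights, y, cv=5).mean()

    # Toy usage with random feature vectors standing in for app features.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=200) > 0).astype(int)
    print("cross-validated accuracy with IG weights:", ig_weighted_svm_score(X, y))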

  3. Spike detection, characterization, and discrimination using feature analysis software written in LabVIEW.

    Science.gov (United States)

    Stewart, C M; Newlands, S D; Perachio, A A

    2004-12-01

    Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high-speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
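    As a rough illustration of the trigger-based detection step described above (not the LabVIEW program, and with only two of the nine time and voltage measurements), a numpy sketch follows; the trigger level, window length and synthetic record are assumptions.

    import numpy as np

    def detect_events(trace, trigger, window=32):
        # Find samples where the record rises above the trigger level and cut a
        # fixed-length waveform at each crossing; only two of the nine waveform
        # measurements (peak and trough voltage) are computed here.
        above = trace > trigger
        crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        events = []
        for idx in crossings:
            if idx + window <= trace.size:
                w = trace[idx:idx + window]
                events.append({"index": int(idx), "peak": w.max(), "trough": w.min()})
        return events

    # Toy record: low-amplitude noise with two injected spike-like events.
    rng = np.random.default_rng(4)
    trace = 0.1 * rng.standard_normal(5000)
    for i in (1200, 3500):
        trace[i:i + 10] += np.hanning(10)
    print([e["index"] for e in detect_events(trace, trigger=0.4)])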

  4. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    Science.gov (United States)

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

    The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  5. Regions of micro-calcifications clusters detection based on new features from imbalance data in mammograms

    Science.gov (United States)

    Wang, Keju; Dong, Min; Yang, Zhen; Guo, Yanan; Ma, Yide

    2017-02-01

    Breast cancer is the most common cancer among women. The micro-calcification cluster on an X-ray mammogram is one of the most important abnormalities, and detecting it is effective for early cancer detection. The Surrounding Region Dependence Method (SRDM), a statistical texture analysis method, is applied for detecting Regions of Interest (ROIs) containing microcalcifications. Inspired by the SRDM, we present a method that extracts gray-level and other features which are effective for predicting the positive and negative regions of micro-calcification clusters in mammograms. By constructing a set of artificial images containing only micro-calcifications, we locate the suspicious calcification pixels of an SRDM matrix in the original image map. Features are extracted based on these pixels for the imbalanced data, and then the repeated random subsampling method and a Random Forest (RF) classifier are used for classification. The True Positive (TP) rate and False Positive (FP) rate reflect the quality of the result. The TP rate is 90% and the FP rate is 88.8% when the threshold q is 10. We draw the Receiver Operating Characteristic (ROC) curve, and the Area Under the ROC Curve (AUC) value reaches 0.9224. The experiment indicates that our method is effective. A novel method for detecting regions of micro-calcification clusters is developed, which is based on new features for imbalanced data in mammography, and it can help improve the accuracy of computer-aided diagnosis of breast cancer.

  6. Automatic Railway Traffic Object Detection System Using Feature Fusion Refine Neural Network under Shunting Mode

    Directory of Open Access Journals (Sweden)

    Tao Ye

    2018-06-01

    Full Text Available Many accidents happen under shunting mode when the speed of a train is below 45 km/h. In this mode, train attendants observe the railway condition ahead using the traditional manual method and tell the observation results to the driver in order to avoid danger. To address this problem, an automatic object detection system based on a convolutional neural network (CNN) is proposed to detect objects ahead in shunting mode, which is called Feature Fusion Refine neural network (FR-Net). It consists of three connected modules, i.e., the depthwise-pointwise convolution, the coarse detection module, and the object detection module. Depthwise-pointwise convolutions are used to make detection feasible in real time. The coarse detection module coarsely refines the locations and sizes of prior anchors to provide better initialization for the subsequent module and also reduces the search space for classification, whereas the object detection module aims to regress accurate object locations and predict the class labels for the prior anchors. The experimental results on the railway traffic dataset show that FR-Net achieves 0.8953 mAP with 72.3 FPS performance on a machine with a GeForce GTX1080Ti with an input size of 320 × 320 pixels. The results imply that FR-Net achieves a good tradeoff between effectiveness and real-time performance. The proposed method can meet the needs of practical application in shunting mode.
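    The depthwise-pointwise convolution named above can be sketched compactly in PyTorch. The block below is a generic depthwise separable convolution with illustrative channel counts, normalization and activation; it is not the FR-Net specification.

    import torch
    import torch.nn as nn

    class DepthwisePointwise(nn.Module):
        # Depthwise convolution (one 3x3 filter per input channel) followed by a
        # 1x1 pointwise convolution that mixes channels; this factorization is what
        # keeps blocks of this kind cheap enough for real-time detection.
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                       padding=1, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    # A 320 x 320 RGB input, matching the input size reported above.
    x = torch.randn(1, 3, 320, 320)
    print(DepthwisePointwise(3, 32, stride=2)(x).shape)  # torch.Size([1, 32, 160, 160])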

  7. Obscenity detection using haar-like features and Gentle Adaboost classifier.

    Science.gov (United States)

    Mustafa, Rashed; Min, Yang; Zhu, Dingju

    2014-01-01

    A large exposed skin area in an image is often taken as an indicator of obscenity. This criterion alone may produce many false positives on images containing skin-like objects and may miss images in which only a small skin area is exposed but erotogenic human body parts are visible. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is considered as an erotogenic organ for identifying pornographic contents in images. In this research, a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that the haar-cascade classifier performs well in terms of accuracy, but the train-cascade classifier is preferable when detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection times are 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.
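    For illustration only, the fragment below shows how a trained Haar cascade is applied with OpenCV; the image and cascade paths are hypothetical, and training the cascade on the positive and negative samples (e.g. with opencv_traincascade) is assumed to have been done beforehand.

    import cv2

    def detect_with_cascade(image_path, cascade_path):
        # Run a trained Haar cascade over a grayscale image and return the
        # bounding boxes of detected objects; the cascade XML is assumed to have
        # been trained beforehand.
        cascade = cv2.CascadeClassifier(cascade_path)
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Hypothetical paths; any trained cascade and test image can be substituted.
    for (x, y, w, h) in detect_with_cascade("sample.jpg", "trained_cascade.xml"):
        print("detection at", x, y, "size", w, h)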

  8. Obscenity Detection Using Haar-Like Features and Gentle Adaboost Classifier

    Directory of Open Access Journals (Sweden)

    Rashed Mustafa

    2014-01-01

    Full Text Available A large exposed skin area in an image is often taken as an indicator of obscenity. This criterion alone may produce many false positives on images containing skin-like objects and may miss images in which only a small skin area is exposed but erotogenic human body parts are visible. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is considered as an erotogenic organ for identifying pornographic contents in images. In this research, a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that the haar-cascade classifier performs well in terms of accuracy, but the train-cascade classifier is preferable when detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection times are 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.

  9. A simple optimization can improve the performance of single feature polymorphism detection by Affymetrix expression arrays

    Directory of Open Access Journals (Sweden)

    Fujisawa Hironori

    2010-05-01

    Full Text Available Abstract Background High-density oligonucleotide arrays are effective tools for genotyping numerous loci simultaneously. In small genome species (genome size: Results We compared the single feature polymorphism (SFP) detection performance of whole-genome and transcript hybridizations using the Affymetrix GeneChip® Rice Genome Array, using the rice cultivars with full genome sequence, japonica cultivar Nipponbare and indica cultivar 93-11. Both genomes were surveyed for all probe target sequences. Only completely matched 25-mer single copy probes of the Nipponbare genome were extracted, and SFPs between them and 93-11 sequences were predicted. We investigated optimum conditions for SFP detection in both whole genome and transcript hybridization using differences between perfect match and mismatch probe intensities of non-polymorphic targets, assuming that these differences are representative of those between mismatch and perfect targets. Several statistical methods of SFP detection by whole-genome hybridization were compared under the optimized conditions. Causes of false positives and negatives in SFP detection in both types of hybridization were investigated. Conclusions The optimizations allowed a more than 20% increase in true SFP detection in whole-genome hybridization and a large improvement of SFP detection performance in transcript hybridization. Significance analysis of the microarray for log-transformed raw intensities of PM probes gave the best performance in whole genome hybridization, and 22,936 true SFPs were detected with 23.58% false positives by whole genome hybridization. For transcript hybridization, stable SFP detection was achieved for highly expressed genes, and about 3,500 SFPs were detected at a high sensitivity (> 50%) in both shoot and young panicle transcripts. High SFP detection performances of both genome and transcript hybridizations indicated that microarrays of a complex genome (e.g., of Oryza sativa) can be

  10. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumption about the stationarity or the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, where optimal features were selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated and then the best feature space was selected by the genetic algorithm in combination with the Bayesian classifier. The results showed that the proposed method is capable of discriminating the ictal EEG from the normal EEG, where for the Minimum norm and 0.1˂ε˂1 the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
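    A minimal numpy sketch of the object at the core of RQA, the ε-thresholded recurrence plot, with the recurrence rate as the simplest quantification measure; the embedding dimension, delay, threshold and sinusoidal toy signal are assumptions, and the genetic-algorithm selection and Bayesian classification steps are omitted.

    import numpy as np

    def embed(x, dim=3, tau=2):
        # Time-delay embedding of a 1-D signal into dim-dimensional state vectors.
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def recurrence_matrix(x, eps, dim=3, tau=2, norm="euclidean"):
        # Binary recurrence plot: R[i, j] = 1 when embedded states i and j are
        # closer than the threshold eps under the chosen distance norm.
        X = embed(x, dim, tau)
        diff = X[:, None, :] - X[None, :, :]
        if norm == "maximum":
            dist = np.abs(diff).max(axis=-1)
        else:  # Euclidean by default
            dist = np.sqrt((diff ** 2).sum(axis=-1))
        return (dist < eps).astype(int)

    def recurrence_rate(R):
        # Fraction of recurrent points, the simplest RQA measure.
        return R.mean()

    # Toy usage on a short sinusoid standing in for an EEG epoch.
    x = np.sin(np.linspace(0, 8 * np.pi, 300))
    R = recurrence_matrix(x, eps=0.3)
    print("recurrence rate:", round(recurrence_rate(R), 3))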

  11. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-01-01

    Full Text Available Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared across ten state-of-the-art classifiers. Results indicate that optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reaches 96.6%, which can meet the needs of real applications. The best combination of channel, feature and classifier is subject-specific. In this work, the accuracy of FE as the feature is far greater than that of the other features. The accuracy using the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. The impact of channel selection on accuracy is large, and the performance of different channels varies considerably.
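    Of the four entropy measures, sample entropy is the easiest to sketch. The direct O(n²) implementation below uses the conventional parameters m = 2 and r = 0.2·std, which are illustrative defaults rather than the values used in the study.

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        # Sample entropy: the negative log of the conditional probability that
        # sequences matching for m points also match for m + 1 points, with
        # tolerance r = r_factor * std(x). Direct O(n^2) implementation.
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def match_count(length):
            t = np.array([x[i:i + length] for i in range(len(x) - length)])
            d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=-1)
            return (d <= r).sum() - len(t)  # exclude self-matches

        B, A = match_count(m), match_count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # White noise should look less regular (higher entropy) than a sinusoid.
    rng = np.random.default_rng(5)
    print("sine :", round(sample_entropy(np.sin(np.linspace(0, 20, 400))), 3))
    print("noise:", round(sample_entropy(rng.standard_normal(400)), 3))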

  12. Driver Fatigue Detection System Using Electroencephalography Signals Based on Combined Entropy Features

    Directory of Open Access Journals (Sweden)

    Zhendong Mu

    2017-02-01

    Full Text Available Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, may be more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, undergoing driving training in a virtual environment under the instruction of the operator. Four types of entropy (spectral entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. An electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the features extracted from electrodes T5, TP7, TP8 and FP1 may yield better performance. The SVM classification algorithm using a radial basis function kernel obtained better results. The combined entropy-based method demonstrates good classification performance for driver fatigue detection.

  13. Minimal Data Fidelity for Successful detection of Stellar Features or Companions

    Science.gov (United States)

    Agarwal, S.; Wettlaufer, J. S.

    2017-12-01

    Technological advances in instrumentation have led to an exponential increase in exoplanet detection and scrutiny of stellar features such as spots and faculae. While the spots and faculae enable us to understand the stellar dynamics, exoplanets provide us with a glimpse into stellar evolution. While a clean set of data is always desirable, noise is ubiquitous in the data, whether telluric, instrumental, or photonic, and combining this with increased spectrographic resolution compounds the technological challenges. To account for these noise sources and resolution issues, using a temporal multifractal framework, we study data from the SOAP 2.0 tool, which simulates a stellar spectrum in the presence of a spot, a facula or a planet. Given these clean simulations, we vary the resolution as well as the signal-to-noise (S/N) ratio to obtain a lower limit on the resolution and S/N required to robustly detect features. We show that a spot and a facula with 1% coverage of the stellar disk can be robustly detected at a S/N (per resolution element) of 20 and 35 respectively for any resolution above 20,000, while a planet with an RV of 10 m s-1 can be detected at a S/N (per resolution element) of 350. Rather than viewing noise as an impediment, this approach uses noise as a source of information.

  14. Early detection of breast cancer mass lesions by mammogram segmentation images based on texture features

    International Nuclear Information System (INIS)

    Mahmood, F.H.

    2012-01-01

    Mammography is at present one of the available methods for early detection of masses or abnormalities related to breast cancer, such as calcifications. The challenge lies in early and accurate detection to counter the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed at advanced stages with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years. This paper proposes a computer-aided diagnostic system for the extraction of features such as mass lesions in mammograms for early detection of breast cancer. The proposed technique is based on a four-step procedure: (a) preprocessing of the image, (b) specification of regions of interest (ROI), (c) a supervised segmentation method comprising two stages performed using the minimum distance (MD) criterion, and (d) feature extraction based on gray-level co-occurrence matrices (GLCM) for the identification of mass lesions. The method suggested for the detection of mass lesions from mammogram image segmentation and analysis was tested on several images taken from Al-llwiya Hospital in Baghdad, Iraq. The proposed technique shows better results.

  15. FEATURE RECOGNITION BASED ON CORNER DETECTION WITH THE FAST, SURF AND FLANN TREE METHODS FOR LOGO IDENTIFICATION IN AN AUGMENTED REALITY MOBILE SYSTEM

    Directory of Open Access Journals (Sweden)

    Rastri Prathivi

    2014-01-01

    Full Text Available A logo is a graphical symbol that serves as the identity of an organization, institution, or company. A logo is generally used to introduce the existence of an organization, institution, or company to the public; through its logo, the existence of an agency can be recognized by the public. Feature recognition is one of the processes within an augmented reality system, and one use of augmented reality is to recognize the identity of a logo through a camera. The first step in the feature recognition process is corner detection. Combining several methods, such as FAST, SURF, and FLANN TREE, for corner-based feature detection and feature matching gives a better ability to detect the presence of a logo. Additionally, several issues arise when running the feature extraction process, such as scale invariance and rotation invariance. In this study, logos are the research objects for the feature recognition process. The FAST, SURF, and FLANN TREE methods detect logos under scale-invariant and rotation-invariant conditions. This study demonstrates the accuracy of the FAST, SURF, and FLANN TREE methods in solving the scale-invariance and rotation-invariance problems.
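    A hedged OpenCV sketch of the corner-detection and matching idea: FAST corners are described with ORB (used here in place of SURF, which requires the non-free opencv-contrib build) and matched with a FLANN LSH index; the paths, ratio-test threshold and match count are assumptions.

    import cv2

    def match_logo(logo_path, scene_path, min_matches=10):
        # Detect FAST corners, describe them with ORB, and match the binary
        # descriptors with a FLANN LSH index; enough good matches after a ratio
        # test suggest the logo is present in the scene.
        fast = cv2.FastFeatureDetector_create()
        orb = cv2.ORB_create()
        flann = cv2.FlannBasedMatcher(
            dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1), {})

        def describe(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            keypoints = fast.detect(img, None)
            return orb.compute(img, keypoints)[1]

        des_logo, des_scene = describe(logo_path), describe(scene_path)
        matches = flann.knnMatch(des_logo, des_scene, k=2)
        good = [m for pair in matches if len(pair) == 2
                for m, n in [pair] if m.distance < 0.7 * n.distance]
        return len(good) >= min_matches

    # Hypothetical image paths.
    print("logo found:", match_logo("logo.png", "frame.png"))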

  16. Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation

    Directory of Open Access Journals (Sweden)

    G. Benassai

    2017-09-01

    Full Text Available The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be achieved by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis and a set of high-resolution wave numerical simulations, completed with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the rip current occurrence, forced by topographically constrained channels incised on the seabed, which were compared with observations.

  17. Automated Feature and Event Detection with SDO AIA and HMI Data

    Science.gov (United States)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F..; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgouli, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO) . Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  18. A new feature detection mechanism and its application in secured ECG transmission with noise masking.

    Science.gov (United States)

    Sufi, Fahim; Khalil, Ibrahim

    2009-04-01

    With cardiovascular disease as the number one killer of the modern era, the electrocardiogram (ECG) is collected, stored and transmitted with greater frequency than ever before. However, in reality, ECG is rarely transmitted and stored in a secured manner. Recent research shows that an eavesdropper can reveal the identity and cardiovascular condition from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and also stored as such in medical repositories. To achieve this, first of all, this paper presents a new ECG feature detection mechanism, which was compared against existing cross correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Compared to the CC based approaches, which had 40% and 53% misclassification rates, the proposed detection algorithm did not make a single misclassification. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects using added noises corresponding to each of the ECG features. This obfuscated ECG can be freely distributed over the internet without the necessity of encryption, since the original features needed to identify personal information of the patient remain concealed. Only authorized personnel possessing a secret key will be able to reconstruct the original ECG from the obfuscated ECG. The distributed obfuscated ECG would appear as a regular ECG without encryption. Therefore, traditional decryption techniques, including powerful brute force attacks, are useless against this obfuscation.

  19. LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.

    Science.gov (United States)

    Zhang, Tao; Chen, Wanzhong

    2017-08-01

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly-developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). First, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), un-optimized support vector machine (SVM) and SVM optimized by genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracy of the presented approach is equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.

  20. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor is employed with different spatial window sizes in the RGB and La*b* color spaces. In this way, spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared for various sample sizes by Support Vector Machines using the k-fold cross-validation method. According to the presented results, the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
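    A minimal scikit-image sketch of windowed Haralick-style texture features; the window size, gray-level quantization and the three chosen statistics are illustrative, and the color-space handling and SVM stage described above are omitted.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def haralick_window_features(gray, window=32, step=32, levels=32):
        # Quantize the image to a small number of gray levels, then compute a few
        # Haralick-style GLCM statistics (contrast, homogeneity, energy) on
        # non-overlapping square windows.
        g = (gray / gray.max() * (levels - 1)).astype(np.uint8)
        feats = []
        for r in range(0, g.shape[0] - window + 1, step):
            for c in range(0, g.shape[1] - window + 1, step):
                patch = g[r:r + window, c:c + window]
                glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                    levels=levels, symmetric=True, normed=True)
                feats.append([graycoprops(glcm, p).mean()
                              for p in ("contrast", "homogeneity", "energy")])
        return np.array(feats)

    # Toy usage on a random tile standing in for a histopathology image channel.
    rng = np.random.default_rng(6)
    tile = rng.integers(0, 256, size=(128, 128)).astype(float)
    print(haralick_window_features(tile).shape)  # (16, 3)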

  1. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  2. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.

  3. Root Exploit Detection and Features Optimization: Mobile Device and Blockchain Based Medical Data Management.

    Science.gov (United States)

    Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar

    2018-05-04

    The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access to the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploits, botnets, and Trojans, and root exploit is one of the most dangerous types of malware. It compromises the operating system kernel in order to gain root privileges, which are then used by attackers to bypass the security mechanisms, to gain complete control of the operating system, to install other possible types of malware on the devices, and finally, to steal victims' private keys linked to the blockchain. For the purpose of maximizing the security of blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes to use the bio-inspired method of particle swarm optimization (PSO), which automatically selects the exclusive features that contain the novel android debug bridge (ADB). This study also adopts boosting (adaboost, realadaboost, logitboost, and multiboost) to enhance the machine learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system command, (2) directory path and (3) code-based. The evaluation gathered from this study suggests a marked accuracy value of 93% with Logitboost in the simulation. Logitboost also helped to predict all the root exploit samples in our developed system, the root exploit detection system (RODS).

  4. Combining Cluster Analysis and Small Unmanned Aerial Systems (sUAS) for Accurate and Low-cost Bathymetric Surveying

    Science.gov (United States)

    Maples, B. L.; Alvarez, L. V.; Moreno, H. A.; Chilson, P. B.; Segales, A.

    2017-12-01

    Given that classical in-situ direct surveying for geomorphological subsurface information in rivers is time-consuming, labor-intensive, costly, and often involves high-risk activities, it is obvious that non-intrusive technologies, such as UAS-based and LIDAR-based remote sensing, have promising potential and benefits in terms of efficient and accurate measurement of channel topography over large areas within a short time; therefore, a tremendous amount of attention has been paid to the development of these techniques. Over the past two decades, efforts have been undertaken to develop a specialized technique that can penetrate the water body and detect the channel bed to derive river and coastal bathymetry. In this research, we develop a low-cost, effective technique for water body bathymetry. With the use of a sUAS and a light-weight sonar, the bathymetry and volume of a small reservoir have been surveyed. The sUAS surveying approach is conducted at low altitude (2 meters from the water) using the sUAS to tow a small boat with the sonar attached. A cluster analysis is conducted to optimize the sUAS data collection and minimize the standard deviation created by under-sampling in areas of highly variable bathymetry, so measurements are densified in regions characterized by steep slopes and drastic changes in the reservoir bed. This technique provides flexibility, efficiency, and freedom from risk to humans while obtaining high-quality information. The irregularly-spaced bathymetric survey is then interpolated using unstructured Triangular Irregular Network (TIN)-based maps to avoid re-gridding or re-sampling issues.

  5. Topographic attributes as a guide for automated detection or highlighting of geological features

    Science.gov (United States)

    Viseur, Sophie; Le Men, Thibaud; Guglielmi, Yves

    2015-04-01

    Photogrammetry or LIDAR technology combined with photography allows geoscientists to obtain high-resolution 3D numerical representations of outcrops, generally termed Digital Outcrop Models (DOM). For over a decade, these 3D numerical outcrops have served as a support for precise and accurate interpretations of geological features such as fracture traces or planes, strata, facies mapping, etc. These interpretations have the benefit of being directly georeferenced and embedded in 3D space. They are then easily integrated into GIS or geomodeler software for modelling the subsurface geological structures in 3D. However, numerical outcrops generally represent huge data sets that are heavy to manipulate and hence to interpret. This can be particularly tedious when several scales of geological features must be investigated or when geological features are very dense and imbricated. Automated tools for interpreting geological features from DOMs would then be a significant help in processing these kinds of data. Such technologies are commonly used for interpreting seismic or medical data. However, even though many efforts have been devoted to acquiring 3D topographic point clouds and photos easily and accurately and to visualizing accurate 3D textured DOMs, little attention has been paid to the development of algorithms for automated detection of geological structures from DOMs. The automatic detection of objects in numerical data generally assumes that signals or attributes computed from the data allow the recognition of the targeted object boundaries. The first step then consists of defining attributes that highlight the objects or their boundaries. For DOM interpretations, some authors have proposed using differential operators computed on the surface, such as normals or curvatures. These methods generally extract polylines corresponding to fracture traces or bed limits. Other approaches rely on the PCA technology to segregate different topographic plans

  6. Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems

    Science.gov (United States)

    Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.

    2015-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. Effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with an accuracy and precision of shallow to deep water allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the ability for handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low-cost creates the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  7. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of the selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
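    The first index lends itself to a direct sketch. In the fragment below the wavelength windows used to pick the "peak" and "depth" values are illustrative assumptions, not the ranges selected in the paper.

    import numpy as np

    def band_extremum(wavelengths, reflectance, lo, hi, kind):
        # Local extremum of reflectance ("peak" or "depth") inside [lo, hi] nm.
        sel = (wavelengths >= lo) & (wavelengths <= hi)
        return reflectance[sel].max() if kind == "peak" else reflectance[sel].min()

    def normalized_index(peak, depth):
        # Index of the form (depth - peak) / (depth + peak), as described above.
        return (depth - peak) / (depth + peak)

    # Toy spectrum; the wavelength windows below are illustrative, not the paper's.
    wavelengths = np.linspace(350, 2500, 1000)
    reflectance = 0.3 + 0.05 * np.sin(wavelengths / 120.0)
    peak = band_extremum(wavelengths, reflectance, 550, 600, "peak")
    depth = band_extremum(wavelengths, reflectance, 640, 690, "depth")
    print("index:", round(normalized_index(peak, depth), 4))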

  8. Archaeological Feature Detection from Archive Aerial Photography with a Sfm-Mvs and Image Enhancement Pipeline

    Science.gov (United States)

    Peppa, M. V.; Mills, J. P.; Fieber, K. D.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. G.

    2018-05-01

    Understanding and protecting cultural heritage involves the detection and long-term documentation of archaeological remains alongside the spatio-temporal analysis of their landscape evolution. Archive aerial photography can illuminate traces of ancient features which typically appear with different brightness values from their surrounding environment, but are not always well defined. This research investigates the implementation of the Structure-from-Motion - Multi-View Stereo image matching approach with an image enhancement algorithm to derive three epochs of orthomosaics and digital surface models from visible and near infrared historic aerial photography. The enhancement algorithm uses decorrelation stretching to improve the contrast of the orthomosaics so that archaeological features are better detected. Results include 2D / 3D locations of detected archaeological traces stored in a geodatabase for further archaeological interpretation and correlation with benchmark observations. The study also discusses the merits and difficulties of the process involved. This research is based on a Europe-wide project, entitled "Cultural Heritage Through Time", and the case study research was carried out as a component of the project in the UK.
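    A standard decorrelation stretch can be sketched in a few lines of numpy; this is a generic principal-axis implementation for illustration, not necessarily the exact enhancement algorithm used in the project.

    import numpy as np

    def decorrelation_stretch(img):
        # Rotate the bands into their principal axes, equalize the variance along
        # each axis, and rotate back; this exaggerates subtle colour differences
        # such as faint crop or soil marks in an orthomosaic.
        h, w, b = img.shape
        X = img.reshape(-1, b).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
        stretch = eigvec @ np.diag(1.0 / np.sqrt(eigval + 1e-12)) @ eigvec.T
        Y = Xc @ stretch * X.std(axis=0).mean() + mean  # rescale to a usable range
        return np.clip(Y, 0, 255).reshape(h, w, b).astype(np.uint8)

    # Toy usage on a random RGB patch standing in for an orthomosaic tile.
    rng = np.random.default_rng(7)
    patch = rng.integers(60, 200, size=(64, 64, 3)).astype(np.uint8)
    print(decorrelation_stretch(patch).shape)  # (64, 64, 3)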

  9. Automatic detection and classification of breast tumors in ultrasonic images using texture and morphological features.

    Science.gov (United States)

    Su, Yanni; Wang, Yuanyuan; Jiao, Jing; Guo, Yi

    2011-01-01

    Due to the severe presence of speckle noise, poor image contrast and irregular lesion shape, it is challenging to build a fully automatic detection and classification system for breast ultrasonic images. In this paper, a novel and effective computer-aided method, including generation of a region of interest (ROI), segmentation and classification of breast tumors, is proposed without any manual intervention. By incorporating local features of texture and position, an ROI is first detected using a self-organizing map neural network. Then a modified Normalized Cut approach considering the weighted neighborhood gray values is proposed to partition the ROI into clusters and obtain the initial boundary. In addition, a regional-fitting active contour model is used to adjust the few inaccurate initial boundaries for the final segmentation. Finally, three texture and five morphologic features are extracted from each breast tumor, whereby a highly efficient Affinity Propagation clustering is used to perform the malignant and benign classification for an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant) with its performance compared to traditional methods such as level set segmentation, artificial neural network classifiers, and so forth. Experimental results show that the proposed system, which needs no training procedure or manual intervention, performs best in the detection and classification of ultrasonic breast tumors, while having the lowest computational complexity.

  10. LAND COVER CHANGE DETECTION BASED ON GENETIC FEATURE SELECTION AND IMAGE ALGEBRA USING HYPERION HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    S. T. Seydi

    2015-12-01

    Full Text Available The Earth has always been under the influence of population growth and human activities. This process causes changes in land use. Thus, for optimal management of resources, it is necessary to be aware of these changes. Satellite remote sensing has several advantages for monitoring land use/cover resources, especially for large geographic areas. Change detection and attribution of cultivation area over time present additional challenges for correctly analyzing remote sensing imagery. In this regard, to better identify change in multi-temporal images, we use hyperspectral images. Owing to their high spectral resolution, hyperspectral images have found a special place in many fields. Nevertheless, selecting suitable and adequate features/bands from these data is crucial for any analysis and especially for change detection algorithms. This research introduces an automatic feature selection method for detecting land use changes. In this study, the optimal bands are selected from Hyperion hyperspectral images by using genetic algorithms and band ratios. In addition, the results reveal the superiority of the implemented method for extracting a change map, with an overall accuracy of nearly 79%, using multi-temporal hyperspectral imagery.
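    A minimal sketch of the image-algebra half of the approach: a per-band log-ratio change map between two dates with a fixed threshold; the band index, threshold and synthetic data are assumptions, and the genetic-algorithm band selection is omitted.

    import numpy as np

    def ratio_change_map(img_t1, img_t2, band_idx, threshold=0.2):
        # Image-algebra change detection on one selected band: flag pixels whose
        # absolute log-ratio between the two dates exceeds a threshold.
        b1 = img_t1[:, :, band_idx].astype(float) + 1e-6
        b2 = img_t2[:, :, band_idx].astype(float) + 1e-6
        return np.abs(np.log(b2 / b1)) > threshold

    # Toy usage on two synthetic "dates" with a simulated changed block.
    rng = np.random.default_rng(8)
    t1 = rng.uniform(0.2, 0.6, size=(50, 50, 10))
    t2 = t1.copy()
    t2[10:20, 10:20, 3] *= 1.8  # simulated land-cover change in band 3
    print("changed pixels:", int(ratio_change_map(t1, t2, band_idx=3).sum()))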

  11. Research on Copy-Move Image Forgery Detection Using Features of Discrete Polar Complex Exponential Transform

    Science.gov (United States)

    Gan, Yanfen; Zhong, Junliu

    2015-12-01

    With the aid of sophisticated photo-editing software, such as Photoshop, the copy-move image forgery operation has been widely applied and has become a major concern in the field of information security in modern society. A lot of work on detecting this kind of forgery has achieved good results, but the detection of geometrically transformed copy-move regions is still not satisfactory. In this paper, a new method based on the Polar Complex Exponential Transform is proposed. This method addresses issues in image geometric moments, focusing on constructing rotation-invariant moments and extracting features from them. In order to reduce rounding errors of the transform from the Polar coordinate system to the Cartesian coordinate system, a new transformation method is also presented and discussed in detail. The new method constructs a 9 × 9 shrunk template to transform the Cartesian coordinate system back to the Polar coordinate system, which reduces transform errors to a much greater degree. Forgery detection, such as copy-move image forgery detection, is a difficult procedure, but experiments show our method is a great improvement in detecting and identifying forged images affected by rotation.

  12. Bright Retinal Lesions Detection using Colour Fundus Images Containing Reflective Features

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Chaum, Edward [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK)

    2009-01-01

    In recent years the research community has developed many techniques to detect and diagnose diabetic retinopathy with retinal fundus images. This is a necessary step for the implementation of a large-scale screening effort in rural areas where ophthalmologists are not available. In the United States of America, the incidence of diabetes is worryingly increasing among the young population. Retinal fundus images of patients younger than 20 years old present a high amount of reflection due to the Nerve Fibre Layer (NFL); the younger the patient, the more visible these reflections are. We are not aware of algorithms able to explicitly deal with this type of reflection artefact. This paper presents a technique to detect bright lesions even in patients with a highly reflective NFL. First, the candidate bright lesions are detected using image equalization and relatively simple histogram analysis. Then, a classifier is trained using a texture descriptor (multi-scale local binary patterns) and other features in order to remove the false positives in the lesion detection. Finally, the area of the lesions is used to diagnose diabetic retinopathy. Our database consists of 33 images from a telemedicine network currently under development. When determining moderate to high diabetic retinopathy using the detected bright lesions, the algorithm achieves a sensitivity of 100% at a specificity of 100% using hold-one-out testing.

  13. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with a sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.

  14. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features

    Science.gov (United States)

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-04-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  15. Airborne electromagnetic detection of shallow seafloor topographic features, including resolution of multiple sub-parallel seafloor ridges

    Science.gov (United States)

    Vrbancich, Julian; Boyd, Graham

    2014-05-01

    The HoistEM helicopter time-domain electromagnetic (TEM) system was flown over waters in Backstairs Passage, South Australia, in 2003 to test the bathymetric accuracy and hence the ability to resolve seafloor structure in shallow and deeper waters (extending to ~40 m depth) that contain interesting seafloor topography. The topography that forms a rock peak (South Page) in the form of a mini-seamount that barely rises above the water surface was accurately delineated along its ridge from the start of its base (where the seafloor is relatively flat) in ~30 m water depth to its peak at the water surface, after an empirical correction was applied to the data to account for imperfect system calibration, consistent with earlier studies using the same HoistEM system. A much smaller submerged feature (Threshold Bank) of ~9 m peak height located in waters of 35 to 40 m depth was also accurately delineated. These observations when checked against known water depths in these two regions showed that the airborne TEM system, following empirical data correction, was effectively operating correctly. The third and most important component of the survey was flown over the Yatala Shoals region that includes a series of sub-parallel seafloor ridges (resembling large sandwaves rising up to ~20 m from the seafloor) that branch out and gradually decrease in height as the ridges spread out across the seafloor. These sub-parallel ridges provide an interesting topography because the interpreted water depths obtained from 1D inversion of TEM data highlight the limitations of the EM footprint size in resolving both the separation between the ridges (which vary up to ~300 m) and the height of individual ridges (which vary up to ~20 m), and possibly also the limitations of assuming a 1D model in areas where the topography is quasi-2D/3D.

  16. Bathymetric Signatures of Oceanic Detachment Faulting and Potential Ultramafic Lithologies at Outcrop or in the Shallow Subseafloor

    Science.gov (United States)

    Cann, J. R.; Smith, D. K.; Escartin, J.; Schouten, H.

    2008-12-01

    For ten years, domal bathymetric features capped by corrugated and striated surfaces have been recognized as exposures of oceanic detachment faults, and hence potentially as exposures of plutonic rocks from lower crust or upper mantle. Associated with these domes are other bathymetric features that indicate the presence of detachment faulting. Taken together these bathymetric signatures allow the mapping of large areas of detachment faulting at slow and intermediate spreading ridges, both at the axis and away from it. These features are: 1. Smooth elevated domes corrugated parallel to the spreading direction, typically 10-30 km wide parallel to the axis; 2. Linear ridges with outward-facing slopes steeper than 20°, running parallel to the spreading axis, typically 10-30 km long; 3. Deep basins with steep sides and relatively flat floors, typically 10-20 km long parallel to the spreading axis and 5-10 km wide. This characteristic bathymetric association arises from the rolling over of long-lived detachment faults as they spread away from the axis. The faults dip steeply close to their origin at a few kilometers depth near the spreading axis, and rotate to shallow dips as they continue to evolve, with associated footwall flexure and rotation of rider blocks carried on the fault surface. The outward slopes of the linear ridges can be shown to be rotated volcanic seafloor transported from the median valley floor. The basins may be formed by the footwall flexure, and may be exposures of the detachment surface. Critical in this analysis is that the corrugated domes are not the only sites of detachment faulting, but are the places where higher parts of much more extensive detachment faults happen to be exposed. The fault plane rises and falls along axis, and in some places is covered by rider blocks, while in others it is exposed at the sea floor. We use this association to search for evidence for detachment faulting in existing surveys, identifying for example an area

  17. Prediction of topographic and bathymetric measurement performance of airborne low-SNR lidar systems

    Science.gov (United States)

    Cossio, Tristan

    Low signal-to-noise ratio (LSNR) lidar (light detection and ranging) is an alternative paradigm to traditional lidar based on the detection of return signals at the single photoelectron level. The objective of this work was to predict low altitude (600 m) LSNR lidar system performance with regards to elevation measurement and target detection capability in topographic (dry land) and bathymetric (shallow water) scenarios. A modular numerical sensor model has been developed to provide data for further analysis due to the dearth of operational low altitude LSNR lidar systems. This simulator tool is described in detail, with consideration given to atmospheric effects, surface conditions, and the effects of laser phenomenology. Measurement performance analysis of the simulated topographic data showed results comparable to commercially available lidar systems, with a standard deviation of less than 12 cm for calculated elevation values. Bathymetric results, although dependent largely on water turbidity, were indicative of meter-scale horizontal data spacing for sea depths less than 5 m. The high prevalence of noise in LSNR lidar data introduces significant difficulties in data analysis. Novel algorithms to reduce noise are described, with particular focus on their integration into an end-to-end target detection classifier for both dry and submerged targets (cube blocks, 0.5 m to 1.0 m on a side). The key characteristic exploited to discriminate signal and noise is the temporal coherence of signal events versus the random distribution of noise events. Target detection performance over dry earth was observed to be robust, reliably detecting over 90% of targets with a minimal false alarm rate. Comparable results were observed in waters of high clarity, where the investigated system was generally able to detect more than 70% of targets to a depth of 5 m. The results of the study show that CATS, the University of Florida's LSNR lidar prototype, is capable of high fidelity
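
    The noise-rejection idea described above (keeping photon events that are temporally coherent while discarding randomly distributed noise events) can be illustrated with a minimal sketch. The window size, elevation tolerance, neighbor count and data layout below are illustrative assumptions, not parameters of the CATS prototype or its published algorithms.

```python
import numpy as np

def coherence_filter(times, elevations, t_win=0.01, z_tol=0.5, min_neighbors=3):
    """Keep photon events supported by nearby events in time and elevation."""
    times = np.asarray(times, dtype=float)
    elevations = np.asarray(elevations, dtype=float)
    order = np.argsort(times)
    t, z = times[order], elevations[order]
    keep_sorted = np.zeros(t.size, dtype=bool)
    for i in range(t.size):
        lo = np.searchsorted(t, t[i] - t_win, side="left")
        hi = np.searchsorted(t, t[i] + t_win, side="right")
        # neighbors close in both time and elevation, excluding the event itself
        n = np.count_nonzero(np.abs(z[lo:hi] - z[i]) <= z_tol) - 1
        keep_sorted[i] = n >= min_neighbors
    keep = np.zeros(times.size, dtype=bool)
    keep[order] = keep_sorted
    return keep

# Example: dense surface returns survive, sparse random noise is mostly rejected.
rng = np.random.default_rng(0)
t_sig = np.sort(rng.uniform(0, 1, 200)); z_sig = 10 + 0.1 * rng.standard_normal(200)
t_noise = rng.uniform(0, 1, 50); z_noise = rng.uniform(0, 100, 50)
mask = coherence_filter(np.r_[t_sig, t_noise], np.r_[z_sig, z_noise])
```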

  18. [Spectral features analysis of Pinus massoniana with pest of Dendrolimus punctatus Walker and levels detection].

    Science.gov (United States)

    Xu, Zhang-Hua; Liu, Jian; Yu, Kun-Yong; Gong, Cong-Hong; Xie, Wan-Jun; Tang, Meng-Ya; Lai, Ri-Wen; Li, Zeng-Lu

    2013-02-01

    Taking 51 field-measured hyperspectral datasets with different pest levels in Yanping, Fujian Province as the study objects, the spectral reflectance and first-derivative features of four infestation levels (healthy, mild, moderate and severe) were analyzed. On the basis of 7 detection parameters, pest-level detection models were built. The results showed that (1) the spectral reflectance of Pinus massoniana with pests was significantly lower than that of the healthy state, and the higher the pest level, the lower the reflectance; (2) with increasing pest level, the "green peak" and "red valley" of the spectral reflectance curves of Pinus massoniana gradually disappeared, and the red edge was leveled; (3) the pest caused a red shift of the spectral "green peak" and a blue shift of the red edge position, but the changes in the "red valley" and near-infrared positions were complicated; (4) CARI, RES, REA and REDVI were highly correlated with pest levels, whereas the correlations between REP, RERVI, RENDVI and pest level were weak; (5) a multiple linear regression model with the 7 detection parameters as variables could effectively detect the pest levels of Dendrolimus punctatus Walker, with both estimation rate and accuracy above 0.85.
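
    As a rough illustration of two ingredients mentioned above, the sketch below computes a red edge position from a reflectance spectrum via the maximum of the first derivative in the 680-760 nm range, and fits a multiple linear regression of pest level on a feature matrix. The wavelength band limits, feature layout and data are illustrative assumptions, not the exact parameter definitions used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def red_edge_position(wavelengths, reflectance, lo=680.0, hi=760.0):
    """Wavelength of the maximum first derivative of reflectance in the red edge region."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    deriv = np.gradient(reflectance, wavelengths)
    band = (wavelengths >= lo) & (wavelengths <= hi)
    return wavelengths[band][np.argmax(deriv[band])]

# Hypothetical feature matrix: one row per sample, seven spectral parameters per row;
# y holds pest levels coded 0 (healthy) to 3 (severe).
X = np.random.default_rng(1).random((51, 7))
y = np.random.default_rng(2).integers(0, 4, 51)
model = LinearRegression().fit(X, y)
print(model.coef_, model.score(X, y))
```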

  19. Rotation-invariant features for multi-oriented text detection in natural images.

    Directory of Open Access Journals (Sweden)

    Cong Yao

    Full Text Available Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.

  20. Subtidal Bathymetric Changes by Shoreline Armoring Removal and Restoration Projects

    Science.gov (United States)

    Wallace, J.

    2016-12-01

    The Salish Sea, a region with a diverse coastline, is altered by anthropogenic shoreline modifications such as seawalls. In recent years, local organizations have moved to restore these shorelines. Current research monitors the changes restoration projects have on the upper beach, lower beach, and intertidal zone; however, little research exists to record possible negative effects on the subtidal. The purpose of this research is to use multibeam sonar bathymetric data to analyze possible changes to the seafloor structure of the subtidal in response to shoreline modification and to investigate potential ecosystem consequences of shoreline alteration. The subtidal is home to several species including eelgrass (Zostera marina). Eelgrass is an important species in Puget Sound, as it provides many key ecosystem functions, including habitat for a wide variety of organisms and effects on wave physics and sediment transport in the subtidal. Thus, bathymetric changes could impact eelgrass growth and reduce its ability to provide crucial ecosystem services. Three Washington State study sites of completed shoreline restoration projects, representing areas of varied topographic classification, were used to generate data: Seahurst Park in Burien, the Snohomish County Nearshore Restoration Project in Everett, and Cornet Bay State Park on Whidbey Island. Multibeam sonar data were acquired using a Kongsberg EM 2040 system and post-processed in CARIS HIPS to generate a base surface of one-meter resolution. The surface was then imported into the ArcGIS software suite for the generation of spatial metrics. Measurements of change were calculated through a comparison of historical and newly generated data. Descriptive metrics included total elevation change, percent area changed, and a transition matrix of positive and negative change. Additionally, pattern metrics such as surface roughness and Bathymetric Position Index (BPI) were calculated. The comparison of historical data to new data
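
    Bathymetric Position Index compares each cell's elevation with the mean elevation of a surrounding neighborhood, so positive values indicate crests or ridges and negative values indicate depressions. A minimal sketch for a gridded one-metre surface is shown below; the annulus radii are illustrative assumptions, not the scales used in the study.

```python
import numpy as np
from scipy import ndimage

def bathymetric_position_index(depth_grid, inner=5, outer=25):
    """BPI = cell elevation minus mean elevation within an annulus neighborhood.

    depth_grid : 2-D array of seafloor elevations on a regular grid.
    inner, outer : annulus radii in cells.
    """
    yy, xx = np.ogrid[-outer:outer + 1, -outer:outer + 1]
    dist = np.hypot(yy, xx)
    annulus = (dist >= inner) & (dist <= outer)
    # Mean of the neighbors inside the annulus around every cell
    neighborhood_mean = ndimage.convolve(depth_grid.astype(float),
                                         annulus / annulus.sum(),
                                         mode="nearest")
    return depth_grid - neighborhood_mean

# Positive BPI -> crest/ridge, negative -> trough, near zero -> flat or constant slope.
```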

  1. The use of bathymetric data in society and science: a review from the Baltic Sea.

    Science.gov (United States)

    Hell, Benjamin; Broman, Barry; Jakobsson, Lars; Jakobsson, Martin; Magnusson, Ake; Wiberg, Patrik

    2012-03-01

    Bathymetry, the underwater topography, is a fundamental property of oceans, seas, and lakes. As such it is important for a wide range of applications, such as physical oceanography, marine geology, geophysics and biology, or the administration of marine resources. The exact requirements users may have regarding bathymetric data are, however, unclear. Here, the results of a questionnaire survey and a literature review are presented, concerning the use of Baltic Sea bathymetric data in research and for societal needs. It is demonstrated that there is a great need for detailed bathymetric data. Despite the abundance of high-quality bathymetric data produced for safety-of-navigation purposes, the digital bathymetric models (DBMs) publicly available to date cannot satisfy this need. Our study shows that DBMs based on data collected for safety of navigation could substantially improve the base data for administrative decision making as well as the possibilities for marine research in the Baltic Sea.

  2. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Full Text Available Duplicate and near-duplicate web pages are a chief concern for web search engines. In reality, they consume enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are under consideration, we first apply K-mode clustering, and subsequently sentence-feature and fingerprint comparison is used. Using these steps, we identify near-duplicate web pages exactly and efficiently. The experimentation is carried out on web page collections, and the results confirm the efficiency of the proposed approach in detecting near-duplicate web pages.
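
    One simple way to realize a fingerprinting step of this kind is to hash overlapping word shingles of each page and keep the smallest hashes as the fingerprint; two pages whose fingerprints overlap strongly are near-duplicate candidates. The shingle size and fingerprint length below are illustrative choices, not the parameters of the proposed system.

```python
import hashlib

def fingerprint(text, shingle_size=4, keep=64):
    """Return the `keep` smallest shingle hashes as the document fingerprint."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + shingle_size])
                for i in range(max(1, len(words) - shingle_size + 1))}
    hashes = sorted(int(hashlib.md5(s.encode()).hexdigest(), 16) for s in shingles)
    return set(hashes[:keep])

def similarity(fp_a, fp_b):
    """Jaccard overlap of two fingerprints; close to 1.0 for near-duplicates."""
    return len(fp_a & fp_b) / max(1, len(fp_a | fp_b))

a = fingerprint("the quick brown fox jumps over the lazy dog near the river bank")
b = fingerprint("the quick brown fox jumps over a lazy dog near the river bank")
print(similarity(a, b))
```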

  3. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.

    Science.gov (United States)

    Lopes, U K; Valiati, J F

    2017-10-01

    It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiograph by properly trained radiologists. Significant research can be found on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent advances in deep learning have achieved excellent results in image classification across diverse domains, but its application to tuberculosis diagnosis remains limited. Thus, the focus of this work is to advance research in the area by presenting three proposals for the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The results obtained are competitive with published works, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors. Copyright © 2017 Elsevier Ltd. All rights reserved.
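
    A minimal version of the pre-trained-network-as-feature-extractor idea is sketched below using a generic ImageNet-pretrained ResNet from torchvision with a simple classifier on top. The specific backbone, classifier and input pipeline are assumptions for illustration, not the architectures evaluated in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# Pre-trained backbone with the final classification layer removed
# (torchvision >= 0.13; older versions use pretrained=True instead of weights=...)
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def extract_features(pil_images):
    """Map a list of PIL radiograph images to 512-D feature vectors."""
    batch = torch.stack([preprocess(img.convert("RGB")) for img in pil_images])
    return backbone(batch).numpy()

# Example use (hypothetical variables):
# X_train = extract_features(train_images)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```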

  4. A feature matching and fusion-based positive obstacle detection algorithm for field autonomous land vehicles

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2017-03-01

    Full Text Available Positive obstacles can damage field robots while traveling in the field, and the field autonomous land vehicle is a typical field robot. This article presents a feature matching and fusion-based algorithm to detect obstacles using LiDARs for field autonomous land vehicles. There are three main contributions: (1) a novel setup method for a compact LiDAR is introduced, which improves the LiDAR data density and reduces the blind region of the LiDAR sensor; (2) a mathematical model is deduced under this new setup method, and the ideal scan line is generated using the deduced model; (3) based on the proposed mathematical model, a feature matching and fusion (FMAF)-based algorithm is presented and employed to detect obstacles. Experimental results show that the performance of the proposed algorithm is robust and stable, and its computing time is reduced by two orders of magnitude compared with other existing algorithms. The algorithm has been successfully applied to our autonomous land vehicle, which won the championship in the Chinese "Overcome Danger 2014" ground unmanned vehicle challenge.

  5. Spinal focal lesion detection in multiple myeloma using multimodal image features

    Science.gov (United States)

    Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf

    2015-03-01

    Multiple myeloma is a tumor disease of the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur at different sites in the skeleton. To quantify overall tumor mass for determining the degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large number of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume has not been possible until now. Therefore, the development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1-weighted MR images and hyperintensity in T2-weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1- and T2-weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity-derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around each voxel from both T1- and T2-weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients can be classified with an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
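
    The per-voxel classification step can be sketched as follows: local intensity statistics are computed in a small window of each MR channel and stacked into a feature vector, and a random forest then separates lesion from non-lesion voxels. The window size and feature list follow the description above, but the data layout, helper names and forest settings are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(t1, t2, size=5):
    """Stack plain intensities and local min/max/mean/median from T1 and T2 slices."""
    feats = []
    for img in (t1.astype(float), t2.astype(float)):
        feats += [img,
                  ndimage.minimum_filter(img, size=size),
                  ndimage.maximum_filter(img, size=size),
                  ndimage.uniform_filter(img, size=size),   # local mean
                  ndimage.median_filter(img, size=size)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Hypothetical inputs: t1, t2 are 2-D arrays of one vertebra;
# bone_mask and lesion_mask are boolean label images of the same shape.
# X = voxel_features(t1, t2)[bone_mask.ravel()]
# y = lesion_mask.ravel()[bone_mask.ravel()]
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```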

  6. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Entin Martiana Kusumaningtyas

    2018-01-01

    Full Text Available As the WHO reports, heart disease is the leading cause of death, and examining it with current methods in hospitals is not cheap. Iridology is one of the most popular alternative ways to assess the condition of organs: it is the study of signs in the iris that can indicate abnormalities in the body, including basic genetics, toxin deposition, circulatory congestion, and other weaknesses, and it can be practiced by health practitioners and non-experts alike. Research on computer iridology has been done before, including a computer iridology system to detect heart conditions. Such a system involves several stages: eye image capture, pre-processing, cropping, segmentation, feature extraction and classification using thresholding algorithms. In this study, the feature extraction process is performed using a binarization method that transforms the image into black and white. We compare two binarization approaches, one based on grayscale images and one based on proximity. The proposed system was tested at the Mugi Barokah Clinic, Surabaya. We conclude that the grayscale image approach yields better classification than the proximity approach.
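
    The grayscale binarization step can be sketched with a simple global threshold; Otsu's method is used here as a stand-in, and the actual thresholding rule used by the application may differ.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance of a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * prob[:t]).sum() / w0
        m1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def binarize(gray):
    """Transform a grayscale iris image into black and white."""
    return (gray >= otsu_threshold(gray)).astype(np.uint8) * 255
```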

  7. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce weight while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected offline by visual examination of X-ray images, which is time consuming and very expensive; in addition, multiple defects can be misinterpreted as single defects in the X-ray images. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, an artificial neural network (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
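
    The feature pipeline described above (sub-band coding of the ultrasonic signal, wavelet-coefficient energies as features, PCA for selection, then a classifier) can be sketched roughly as below. The wavelet family, decomposition depth and classifier settings are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def subband_features(signal, wavelet="db4", level=5):
    """Energy of the wavelet coefficients in each frequency sub-band of one A-scan."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical data: X has one row of sub-band energies per ultrasonic test signal,
# y holds the defect class labels.
# X = np.array([subband_features(s) for s in signals])
# model = make_pipeline(PCA(n_components=4),
#                       MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
# model.fit(X, y)
```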

  8. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

    This paper presents a new illumination-invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference-of-Gaussians operator. It specifically targets underexposed image regions, exhibiting increased sensitivity to low contrast while not affecting performance in correctly exposed ones. The proposed operator can be used to create a scale space, which in turn can form part of a SIFT-based detector module. The main advantage of this illumination-invariant scale space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance that the proposed operator, as well as other existing operators, exhibits, a new benchmark dataset is introduced. It features a greater variety of imaging conditions than existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems working in environments under uncontrolled illumination conditions.
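
    The scale-space part of this approach can be illustrated with a plain difference-of-Gaussians detector: responses are computed at several scales and keypoints are local extrema above a single global threshold. The sketch below omits the biological center-surround nonlinearity that the paper adds on top; the scales and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.1), threshold=0.02):
    """Detect keypoints as local extrema of difference-of-Gaussians responses."""
    image = image.astype(float) / max(1.0, float(image.max()))
    blurred = [ndimage.gaussian_filter(image, s) for s in sigmas]
    keypoints = []
    for k in range(len(sigmas) - 1):
        dog = blurred[k] - blurred[k + 1]
        local_max = ndimage.maximum_filter(dog, size=3) == dog
        local_min = ndimage.minimum_filter(dog, size=3) == dog
        strong = np.abs(dog) > threshold          # one global threshold
        ys, xs = np.nonzero((local_max | local_min) & strong)
        keypoints += [(y, x, sigmas[k]) for y, x in zip(ys, xs)]
    return keypoints
```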

  9. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for locating faults in non-intrusive single-line-to-ground fault (SLGF) detection of low voltage distribution systems. The input features of the AI algorithms are extracted using a statistical moment transformation to reduce the dimensionality of the power signature inputs measured by non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGF using the Electromagnetic Transient Program (EMTP) in a test system. To enhance identification accuracy, the features are normalized before being given to the AI algorithms presented and evaluated in this paper. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing SLGF for various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify fault locations using non-intrusive monitoring techniques for low voltage distribution systems.
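
    The statistical-moment feature extraction can be sketched as below: each measured power-signature window is reduced to its low-order moments, which are then normalized and fed to a classifier. The window layout, the choice of four moments and the min-max normalization are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def moment_features(signal_window):
    """Reduce one power-signature window to low-order statistical moments."""
    x = np.asarray(signal_window, dtype=float)
    return np.array([x.mean(), x.var(), skew(x), kurtosis(x)])

def normalize(feature_matrix):
    """Column-wise min-max normalization before feeding the AI classifier."""
    f = np.asarray(feature_matrix, dtype=float)
    span = f.max(axis=0) - f.min(axis=0)
    return (f - f.min(axis=0)) / np.where(span == 0, 1, span)

# Hypothetical usage: windows is a list of measured signal windows, labels are fault locations.
# X = normalize(np.array([moment_features(w) for w in windows]))
# Any classifier can then be trained on (X, labels).
```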

  10. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With such data dimensionality and sample sizes, the risk of over-fitting and selection bias is pervasive. Therefore, the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high-resolution spectra derived from analysis of serum, and we then use support vector machines for classification. In particular, we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results for this problem on the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.

  11. Detection of microsleep events in a car driving simulation study using electrocardiographic features

    Directory of Open Access Journals (Sweden)

    Lenis Gustavo

    2016-09-01

    Full Text Available Microsleep events (MSE) are short intrusions of sleep under the demand of sustained attention. They can pose a major threat to safety while driving a car and are considered one of the most significant causes of traffic accidents. Driver fatigue and MSE account for up to 20% of all car crashes in Europe and at least 100,000 accidents in the US every year. Unfortunately, there is no standardized test to quantify the degree of vigilance of a driver. To address this problem, different approaches based on biosignal analysis have been studied in the past. In this paper, we investigate electrocardiographic detection of MSE using morphological and rhythmical features. Fourteen records from a car driving simulation study with a high incidence of MSE were analyzed, and the behavior of the ECG features before and after an MSE was investigated in relation to reference baseline values (without drowsiness). The results show that MSE cannot be detected (or predicted) using only the ECG. However, in the presence of MSE, the rhythmical and morphological features were observed to be significantly different from those calculated for the reference signal without sleepiness. In particular, when MSE were present, the heart rate diminished while the heart rate variability increased. The time intervals between the P wave and the R peak, and between the R peak and the T wave, as well as their dispersion, also increased. This demonstrates a noticeable change in the autonomic regulation of the heart. In the future, these ECG parameters could be used as a surrogate measure of fatigue.
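
    Two of the rhythmical features discussed (heart rate and heart rate variability) can be derived directly from R-peak times, as in the rough sketch below. The variability measure shown (SDNN over the analysis window) is an assumption for illustration, since the abstract does not name a single HRV index.

```python
import numpy as np

def heart_rate_and_hrv(r_peak_times):
    """Mean heart rate (bpm) and RR-interval variability (SDNN, ms) from R-peak times in seconds."""
    rr = np.diff(np.asarray(r_peak_times, dtype=float))   # RR intervals in seconds
    heart_rate = 60.0 / rr.mean()
    sdnn = rr.std(ddof=1) * 1000.0
    return heart_rate, sdnn

# Comparing (heart_rate, sdnn) in windows around a microsleep event with a
# drowsiness-free baseline would reproduce the direction of change reported
# above: heart rate decreases while variability increases when MSE are present.
```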

  12. Statistical methods for detecting differentially abundant features in clinical metagenomic samples.

    Directory of Open Access Journals (Sweden)

    James Robert White

    2009-04-01

    Full Text Available Numerous studies are currently underway to characterize the microbial communities inhabiting our world. These studies aim to dramatically expand our understanding of the microbial biosphere and, more importantly, hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora. An important prerequisite for such discoveries is computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them. We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data (e.g., as obtained through sequencing) to detect differentially abundant features. Our method, Metastats, employs the false discovery rate to improve specificity in high-complexity environments, and separately handles sparsely sampled features using Fisher's exact test. Under a variety of simulations, we show that Metastats performs well compared to previously used methods, and significantly outperforms other methods for features with sparse counts. We demonstrate the utility of our method on several datasets, including a 16S rRNA survey of obese and lean human gut microbiomes, COG functional profiles of infant and mature gut microbiomes, and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes. The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study. For the COG and subsystem datasets, we provide the first statistically rigorous assessment of the differences between these populations. The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects. Our methods are robust across datasets of varied complexity and sampling level. While designed for metagenomic applications, our software
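
    For a sparsely sampled feature, the two-population comparison reduces to a 2x2 contingency table of feature counts versus remaining counts, which is what Fisher's exact test operates on. The sketch below shows only that step with made-up counts; it is not the full Metastats procedure (no false discovery rate correction across features).

```python
from scipy.stats import fisher_exact

def sparse_feature_pvalue(count_a, total_a, count_b, total_b):
    """Fisher's exact test for one feature's abundance in two treatment populations."""
    table = [[count_a, total_a - count_a],
             [count_b, total_b - count_b]]
    _, p_value = fisher_exact(table, alternative="two-sided")
    return p_value

# Example: a taxon seen 3 times in 10,000 reads of group A and 15 times in 9,000 reads of group B.
print(sparse_feature_pvalue(3, 10_000, 15, 9_000))
```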

  13. Intrusion detection model using fusion of chi-square feature selection and multi class SVM

    Directory of Open Access Journals (Sweden)

    Ikram Sumaiya Thaseen

    2017-10-01

    Full Text Available Intrusion detection is a promising area of research in the domain of security, given the rapid development of the internet in everyday life. Many intrusion detection systems (IDS) employ a sole classifier algorithm for classifying network traffic as normal or abnormal. Due to the large amount of data, these sole-classifier models fail to achieve a high attack detection rate with a reduced false alarm rate. However, by applying dimensionality reduction, the data can be efficiently reduced to an optimal set of attributes without loss of information and then classified accurately using a multi-class modeling technique to identify the different network attacks. In this paper, we propose an intrusion detection model using chi-square feature selection and a multi-class support vector machine (SVM). A parameter tuning technique is adopted for optimizing the Radial Basis Function kernel parameter gamma, represented by 'γ', and the over-fitting constant 'C', the two important parameters required for the SVM model. The main idea behind this model is to construct a multi-class SVM, which has not been adopted for IDS so far, to decrease the training and testing time and increase the individual classification accuracy for the network attacks. The experimental results on the NSL-KDD dataset, an enhanced version of the KDDCup 1999 dataset, show that our proposed approach results in a better detection rate and a reduced false alarm rate. An experiment on the computational time required for training and testing is also carried out for usage in time-critical applications.
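
    A compact version of this kind of pipeline (chi-square feature selection followed by an RBF-kernel SVM with tuning of gamma and C) can be written with scikit-learn as below. The number of selected features and the parameter grid are illustrative assumptions, and the NSL-KDD loading and categorical encoding steps are omitted.

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

pipeline = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative features
    ("select", SelectKBest(chi2, k=20)),  # chi-square feature selection
    ("svm", SVC(kernel="rbf", decision_function_shape="ovr")),  # multi-class SVM
])

param_grid = {"svm__gamma": [0.01, 0.1, 1.0], "svm__C": [1, 10, 100]}
search = GridSearchCV(pipeline, param_grid, cv=5)

# Hypothetical usage with a preprocessed, numerically encoded dataset:
# search.fit(X_train, y_train)
# y_pred = search.predict(X_test)
```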

  14. Network Traffic Features for Anomaly Detection in Specific Industrial Control System Network

    Directory of Open Access Journals (Sweden)

    Matti Mantere

    2013-09-01

    Full Text Available The deterministic and restricted nature of industrial control system networks sets them apart from more open networks, such as local area networks in office environments. This improves the usability of network security monitoring approaches that would be less feasible in more open environments. One such approach is machine-learning-based anomaly detection. Without proper customization for the special requirements of the industrial control system network environment, many existing anomaly or misuse detection systems will perform sub-optimally. A machine-learning-based approach could reduce the amount of manual customization required for different industrial control system networks. In this paper we analyze a possible set of features to be used in a machine-learning-based anomaly detection system in the real-world industrial control system network environment under investigation. The network under investigation is represented by an architectural drawing and by results derived from network trace analysis. The network trace is captured from a live, running industrial process control network and includes both control data and the data flowing between the control network and the office network. We limit the investigation to the IP traffic in the traces.

  15. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    Directory of Open Access Journals (Sweden)

    P. Amudha

    2015-01-01

    Full Text Available Intrusion detection has become a main part of network security due to the huge number of attacks affecting computers. This is a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning Repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different.

  16. MixDroid: A multi-features and multi-classifiers bagging system for Android malware detection

    Science.gov (United States)

    Huang, Weiqing; Hou, Erhang; Zheng, Liang; Feng, Weimiao

    2018-05-01

    In the past decade, the Android platform has rapidly taken over the mobile market thanks to its superior convenience and open-source nature. However, with the popularity of Android, malware targeting Android devices is increasing rapidly, and the conventional rule-based and expert-experience approaches are no longer able to handle such explosive growth. In this paper, combining the theory of natural language processing and machine learning, we not only implement basic extraction of permission-request features, but also propose two innovative feature extraction schemes, Dalvik opcode features and malicious code images, and implement MixDroid, an automatic Android malware detection system based on multiple features and multiple classifiers. According to our experimental results on 20,000 Android applications, the detection accuracy of MixDroid is 98.1%, which proves our schemes' effectiveness for Android malware detection.

  17. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    International Nuclear Information System (INIS)

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; Churchill, Michael; Choi, Jong Youl

    2016-01-01

    A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work on parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
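
    The three steps named above (flagging feature cells, grouping them into extended blobs, and tracking blobs by spatial overlap between consecutive frames) have a compact serial analogue, sketched below. The thresholding rule and data layout are illustrative assumptions, and the real implementation is a parallel one.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(frame, n_sigma=2.0):
    """Label connected groups of cells whose value exceeds a frame-relative threshold."""
    mask = frame > frame.mean() + n_sigma * frame.std()   # step 1: feature cells
    labels, n_blobs = ndimage.label(mask)                 # step 2: group into blobs
    return labels, n_blobs

def track_by_overlap(labels_prev, labels_curr):
    """Step 3: match blobs between frames whose footprints overlap in space."""
    matches = {}
    for blob_id in range(1, int(labels_curr.max()) + 1):
        overlap = labels_prev[labels_curr == blob_id]
        overlap = overlap[overlap > 0]
        if overlap.size:
            matches[blob_id] = int(np.bincount(overlap).argmax())  # dominant predecessor
    return matches  # {current blob id: previous blob id}
```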

  18. Implementation of a FPGA-Based Feature Detection and Networking System for Real-time Traffic Monitoring

    OpenAIRE

    Chen, Jieshi; Schafer, Benjamin Carrion; Ho, Ivan Wang-Hei

    2016-01-01

    With the growing demand for real-time traffic monitoring, software-based image processing can hardly meet real-time data processing requirements due to its serial processing nature. In this paper, the implementation of a hardware-based feature detection and networking system prototype for real-time traffic monitoring as well as data transmission is presented. The hardware architecture of the proposed system is mainly composed of three parts: data collection, feature detection,...

  19. Memory-based detection of rare sound feature combinations in anesthetized rats.

    Science.gov (United States)

    Astikainen, Piia; Ruusuvirta, Timo; Wikgren, Jan; Penttonen, Markku

    2006-10-02

    It is unclear whether the ability of the brain to discriminate rare from frequently repeated combinations of sound features is limited to the normal sleep/wake cycle. We recorded epidural auditory event-related potentials in urethane-anesthetized rats presented with rare tones ('deviants') interspersed with frequently repeated ones ('standards'). Deviants differed from standards either in frequency alone or in frequency combined with intensity. In both cases, deviants elicited event-related potentials exceeding in amplitude event-related potentials to standards between 76 and 108 ms from the stimulus onset, suggesting the independence of the underlying integrative and memory-based change detection mechanisms of the brain from the normal sleep/wake cycle. The relations of these event-related potentials to mismatch negativity and N1 in humans are addressed.

  20. Nonlinear features identified by Volterra series for damage detection in a buckled beam

    Directory of Open Access Journals (Sweden)

    Shiki S. B.

    2014-01-01

    Full Text Available The present paper proposes a new index for damage detection based on nonlinear features extracted from prediction errors computed by multiple convolutions using the discrete-time Volterra series. A reference Volterra model is identified with data from the healthy condition and used to monitor the system operating with linear or nonlinear behavior. When the system undergoes a structural change, possibly associated with damage, the computed index metrics can raise an alert, separate the linear and nonlinear contributions, and provide a diagnostic of the structural state. To show the applicability of the method, an experimental test is performed using nonlinear vibration signals measured in a clamped buckled beam subjected to different levels of applied force, with damage simulated through discontinuities inserted in the beam surface.

  1. Multi-feature classifiers for burst detection in single EEG channels from preterm infants

    Science.gov (United States)

    Navarro, X.; Porée, F.; Kuchenbuch, M.; Chavez, M.; Beuchée, Alain; Carrault, G.

    2017-08-01

    Objective. The study of electroencephalographic (EEG) bursts in preterm infants provides valuable information about maturation or prognostication after perinatal asphyxia. Over the last two decades, a number of works have proposed algorithms to automatically detect EEG bursts in preterm infants, but they were designed for populations under 35 weeks of post-menstrual age (PMA). However, as brain activity evolves rapidly during postnatal life, these solutions might under-perform with increasing PMA. In this work we focused on preterm infants reaching term ages (PMA ⩾36 weeks) using multi-feature classification on a single EEG channel. Approach. Five EEG burst detectors relying on different machine learning approaches were compared: logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (kNN), support vector machines (SVM) and thresholding (Th). Classifiers were trained on visually labeled EEG recordings from 14 very preterm infants (born after 28 weeks of gestation) with 36-41 weeks PMA. Main results. The best-performing classifiers reached about 95% accuracy (kNN, SVM and LR), whereas Th obtained 84%. In terms of agreement with human labeling, LR provided the highest scores (Cohen's kappa = 0.71) using only three EEG features. Applying this classifier to an unlabeled database of 21 infants ⩾36 weeks PMA, we found that long EEG bursts and short inter-burst periods are characteristic of infants with the highest PMA and weights. Significance. In view of these results, LR-based burst detection could be a suitable tool to study maturation in monitoring or portable devices using a single EEG channel.

  2. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt, general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  3. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt, general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  4. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
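
    The template-matching similarity named above (length of the longest common subsequence between a test spike sequence and a stored template) can be computed with standard dynamic programming, as in this sketch. The normalization by template length and the recognition helper are illustrative assumptions, not the exact scheme of the paper.

```python
def lcs_length(seq_a, seq_b):
    """Length of the longest common subsequence of two spike (neuron-label) sequences."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seq_a[i - 1] == seq_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def recognize(test_sequence, templates):
    """Pick the template word whose spike sequence is most similar to the test utterance."""
    return max(templates,
               key=lambda word: lcs_length(test_sequence, templates[word]) / len(templates[word]))

# Example with toy spike sequences (neuron indices in firing order):
templates = {"one": [3, 7, 7, 2, 9], "two": [5, 1, 4, 4, 8]}
print(recognize([3, 7, 2, 9, 6], templates))
```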

  5. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM). This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure termed cumulative spectral density index and used to estimate the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make the usage of vibration, acoustic, and vibroacoustic measurements in different applications comparable. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology to implement in CM of rotating machinery, even using measurements with low signal-to-noise ratio.

  6. DETECTION OF SHARP SYMMETRIC FEATURES IN THE CIRCUMBINARY DISK AROUND AK Sco

    International Nuclear Information System (INIS)

    Janson, Markus; Asensio-Torres, Ruben; Thalmann, Christian; Meyer, Michael R.; Garufi, Antonio; Boccaletti, Anthony; Maire, Anne-Lise; Henning, Thomas; Pohl, Adriana; Zurlo, Alice; Marzari, Francesco; Carson, Joseph C.; Augereau, Jean-Charles; Desidera, Silvano

    2016-01-01

    The Search for Planets Orbiting Two Stars survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ∼13–40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these interpretations are discussed, as well as whether these features may have been caused by one or several circumbinary planets interacting with the disk

  7. Feature recognition and detection for ancient architecture based on machine vision

    Science.gov (United States)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

    Ancient architecture has very high historical and artistic value. Ancient buildings have a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Therefore, the survey and statistical analysis of these compositional and decorative features play an important role in subsequent research. Until recently, however, statistics on those components were mainly compiled manually, which consumes a great deal of labor and time and is inefficient. At present, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface texture images of ancient building components are manually classified to form a sample set. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  8. DETECTION OF SHARP SYMMETRIC FEATURES IN THE CIRCUMBINARY DISK AROUND AK Sco

    Energy Technology Data Exchange (ETDEWEB)

    Janson, Markus; Asensio-Torres, Ruben [Department of Astronomy, Stockholm University, AlbaNova University Center, SE-106 91 Stockholm (Sweden); Thalmann, Christian; Meyer, Michael R.; Garufi, Antonio [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich (Switzerland); Boccaletti, Anthony [LESIA, Observatoire de Paris—Meudon, CNRS, Université Pierre et Marie Curie, Université Paris Didierot, 5 Place Jules Janssen, F-92195 Meudon (France); Maire, Anne-Lise; Henning, Thomas; Pohl, Adriana [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Zurlo, Alice [Núcleo de Astronomía, Facultad de Ingeniería, Universidad Diego Portales, Av. Ejercito 441, Santiago (Chile); Marzari, Francesco [Dipartimento di Fisica, University of Padova, Via Marzolo 8, I-35131 Padova (Italy); Carson, Joseph C. [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Augereau, Jean-Charles [Université Grenoble Alpes, IPAG, F-38000 Grenoble (France); Desidera, Silvano [INAF—Osservatorio Astromonico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy)

    2016-01-01

    The Search for Planets Orbiting Two Stars survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ∼13–40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these interpretations are discussed, as well as whether these features may have been caused by one or several circumbinary planets interacting with the disk.

  9. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods always require a great deal of computation yet sometimes still fail to meet real-time demands in robot vision applications. To solve this problem, a fast method for detecting interest points is presented. To decrease the computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation, as in the SIFT descriptor. The feature vector is computed in the original color image, and the mean values of the normalized color components g and b in each subregion are chosen as its elements. Compared with the SIFT descriptor, this descriptor's dimension is markedly reduced, which simplifies the point matching process. The performance of the method is analyzed theoretically in this paper, and the experimental results confirm its validity.

  10. Diagnostic performance of 3D standing CT imaging for detection of knee osteoarthritis features.

    Science.gov (United States)

    Segal, Neil A; Nevitt, Michael C; Lynch, John A; Niu, Jingbo; Torner, James C; Guermazi, Ali

    2015-07-01

    To determine the diagnostic performance of standing computerized tomography (SCT) of the knee for osteophytes and subchondral cysts compared with fixed-flexion radiography, using MRI as the reference standard. Twenty participants were recruited from the Multicenter Osteoarthritis Study. Participants' knees were imaged with SCT while standing in a knee-positioning frame, and with postero-anterior fixed-flexion radiography and 1T MRI. Medial and lateral marginal osteophytes and subchondral cysts were scored on bilateral radiographs and coronal SCT images using the OARSI grading system, and on coronal MRI using Whole Organ MRI Scoring. The imaging modalities were read separately with images in random order. Sensitivity, specificity and accuracy for the detection of lesions were calculated, and differences between modalities were tested using McNemar's test. Participants' mean age was 66.8 years, mean body mass index was 29.6 kg/m², and 50% were women. Of the 160 surfaces (medial and lateral femur and tibia for 40 knees), MRI revealed 84 osteophytes and 10 subchondral cysts. In comparison with the osteophytes and subchondral cysts detected by MRI, SCT was significantly more sensitive (93 and 100%, respectively) than plain radiographs (sensitivity 60 and 10%, accuracy 79 and 94%, respectively). For osteophytes, differences in sensitivity and accuracy were greatest at the medial femur (p = 0.002). In comparison with MRI, SCT imaging was more sensitive and accurate for the detection of osteophytes and subchondral cysts than conventional fixed-flexion radiography. Additional study is warranted to assess the diagnostic performance of SCT measures of joint space width, progression of OA features and the patellofemoral joint.
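
    The per-surface comparison against the MRI reference reduces to 2x2 counts, from which sensitivity, specificity and accuracy follow, and McNemar's test compares two modalities read on the same surfaces. A minimal sketch using the continuity-corrected chi-square form of McNemar's test is given below; the published analysis may have used a different variant.

```python
import numpy as np
from scipy.stats import chi2

def diagnostic_performance(test, reference):
    """Sensitivity, specificity and accuracy of a binary reading against a reference standard."""
    test, reference = np.asarray(test, bool), np.asarray(reference, bool)
    tp = np.sum(test & reference); tn = np.sum(~test & ~reference)
    fp = np.sum(test & ~reference); fn = np.sum(~test & reference)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / test.size

def mcnemar_p(test_a, test_b, reference):
    """McNemar's test on surfaces where the two modalities disagree about the reference finding."""
    correct_a = np.asarray(test_a, bool) == np.asarray(reference, bool)
    correct_b = np.asarray(test_b, bool) == np.asarray(reference, bool)
    b = np.sum(correct_a & ~correct_b)   # modality A right, B wrong
    c = np.sum(~correct_a & correct_b)   # modality B right, A wrong
    stat = (abs(int(b) - int(c)) - 1) ** 2 / max(1, int(b) + int(c))
    return chi2.sf(stat, df=1)
```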

  11. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  12. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  13. P2-18: Temporal and Featural Separation of Memory Items Play Little Role for VSTM-Based Change Detection

    Directory of Open Access Journals (Sweden)

    Dae-Gyu Kim

    2012-10-01

    Full Text Available Classic studies of visual short-term memory (VSTM) found that presenting memory items either sequentially or simultaneously does not affect recognition accuracy of the remembered items. Other studies also suggest that capacity of VSTM benefits from formation of bound object-based representations, leading to no cost of remembering multi-feature items. According to these ideas, we aimed to examine the role of temporal and featural separation of memory items in VSTM change detection, (1) if sample items are separated across different temporal moments and (2) if across different feature dimensions. In a series of change detection experiments, we asked participants to report a change between a sample and a test display with a brief delay in between. In Experiment 1, the sample items were split into two sets with a different onset time. In Experiment 2, the sample items were split across two different feature dimensions (e.g., half color and half orientation). The change detection accuracy in Experiment 1 showed no substantial drop when the memory items were separated into two onset groups compared to simultaneous onset. The accuracy did not drop either when the features of sample items were split across two different feature groups compared to when they were not split. The results indicate that temporal and featural separation of VSTM items does not play a significant role for VSTM-based change detection.

  14. Predicting species diversity of benthic communities within turbid nearshore using full-waveform bathymetric LiDAR and machine learners.

    Directory of Open Access Journals (Sweden)

    Antoine Collin

    Full Text Available Epi-macrobenthic species richness, abundance and composition are linked with type, assemblage and structural complexity of seabed habitat within coastal ecosystems. However, the evaluation of these habitats is highly hindered by limitations related to both waterborne surveys (slow acquisition, shallow water and low reactivity) and water clarity (turbid for most coastal areas). Substratum type/diversity and bathymetric features were elucidated using a supervised method applied to airborne bathymetric LiDAR waveforms over Saint-Siméon-Bonaventure's nearshore area (Gulf of Saint-Lawrence, Québec, Canada). High-resolution underwater photographs were taken at three hundred stations across an 8-km² study area. Seven models based upon state-of-the-art machine learning techniques such as Naïve Bayes, Regression Tree, Classification Tree, C4.5, Random Forest, Support Vector Machine, and CN2 learners were tested for predicting eight epi-macrobenthic species diversity metrics as a function of the class number. The Random Forest outperformed other models with a three-discretized Simpson index applied to epi-macrobenthic communities, explaining 69% (Classification Accuracy) of its variability by mean bathymetry, time range and skewness derived from the LiDAR waveform. Corroborating marine ecological theory, areas with low Simpson epi-macrobenthic diversity responded to low water depths, high skewness and time range, whereas higher Simpson diversity relied upon deeper bottoms (correlated with stronger hydrodynamics) and low skewness and time range. The degree of species heterogeneity was therefore positively linked with the degree of the structural complexity of the benthic cover. This work underpins that fully exploited bathymetric LiDAR (not only bathymetrically derived by-products), coupled with a proficient machine learner, is able to rapidly predict habitat characteristics at a spatial resolution relevant to epi-macrobenthos diversity, ranging from clear to
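
    As a rough illustration of the modelling step reported above, the sketch below (with synthetic placeholder data, not the survey's values) trains a scikit-learn Random Forest to predict a three-class discretized Simpson index from the three waveform predictors named in the abstract: mean bathymetry, time range and skewness.

    # Illustrative Random Forest setup: three LiDAR-waveform predictors,
    # three-class (discretized) Simpson diversity target. Data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 300                                    # e.g. one row per photo station
    X = np.column_stack([
        rng.uniform(0.5, 20.0, n),             # mean bathymetry (m)
        rng.uniform(5.0, 60.0, n),             # waveform time range
        rng.normal(0.0, 1.0, n),               # waveform skewness
    ])
    # Toy rule echoing the reported trend: deeper, low-skewness bottoms -> higher diversity.
    score = X[:, 0] / 20.0 - 0.3 * X[:, 2] + rng.normal(0, 0.2, n)
    y = np.digitize(score, np.quantile(score, [1 / 3, 2 / 3]))  # classes 0, 1, 2

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print("classification accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))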

  15. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

    Directory of Open Access Journals (Sweden)

    Huan Ni

    2016-09-01

    Full Text Available This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D-point clouds. This method is named Analysis of Geometric Properties of Neighborhoods (AGPN), and it includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes geometric properties of each query point’s neighborhood, and then combines RANdom SAmple Consensus (RANSAC) and an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting in the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method obtains promising, reliable, and high performance in detecting edges and tracing feature lines in 3D-point clouds. Moreover, AGPN is insensitive to the point density of the input data.
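
    A hedged sketch of the edge-detection step as described: for each query point a local plane is estimated from its neighbourhood (a plain SVD fit here, rather than the paper's RANSAC), neighbours are projected onto that plane, and the point is flagged as an edge when the largest angular gap between neighbour directions exceeds a threshold. The neighbourhood size and threshold are illustrative choices.

    # Angular-gap edge test on a 3D point cloud (SVD plane fit stands in for RANSAC).
    import numpy as np
    from scipy.spatial import cKDTree

    def detect_edges(points, k=20, gap_threshold_deg=90.0):
        tree = cKDTree(points)
        is_edge = np.zeros(len(points), dtype=bool)
        for i, p in enumerate(points):
            _, idx = tree.query(p, k=k + 1)
            nbrs = points[idx[1:]] - p                 # neighbours relative to p
            # Local plane: two dominant directions of the neighbourhood.
            _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
            u, v = vt[0], vt[1]
            angles = np.sort(np.arctan2(nbrs @ v, nbrs @ u))
            gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
            is_edge[i] = np.degrees(gaps.max()) > gap_threshold_deg
        return is_edge

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Half-plane of points: points near the straight boundary should be flagged.
        pts = np.column_stack([rng.uniform(0, 1, 2000),
                               rng.uniform(0, 0.5, 2000),
                               np.zeros(2000)])
        print(detect_edges(pts).sum(), "edge points detected")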

  16. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm. PMID:27483285
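
    The following skeleton, with entirely synthetic object-difference features, illustrates the general shape of a GPSO-style binary feature selection: a standard binary PSO update with a genetic-style mutation, optimizing a Ratio-of-Mean-to-Variance fitness. The RMV definition used here (mean of the selected difference features over their variance) is only one plausible reading of the abstract, and all constants are illustrative rather than the authors' settings.

    # Binary PSO with mutation, selecting feature subsets that maximise an
    # RMV-style fitness over synthetic object-level difference features.
    import numpy as np

    rng = np.random.default_rng(1)
    diff_features = rng.normal(size=(500, 24))        # objects x candidate features

    def rmv_fitness(mask):
        """Ratio of mean to variance over the selected difference features."""
        if mask.sum() == 0:
            return -np.inf
        sel = diff_features[:, mask.astype(bool)]
        return abs(sel.mean()) / (sel.var() + 1e-9)

    n_particles, n_feat, n_iter = 20, diff_features.shape[1], 50
    pos = rng.integers(0, 2, (n_particles, n_feat))
    vel = np.zeros((n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([rmv_fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))             # sigmoid transfer function
        pos = (rng.random(vel.shape) < prob).astype(int)
        flip = rng.random(vel.shape) < 0.02           # genetic-style mutation
        pos[flip] = 1 - pos[flip]
        fit = np.array([rmv_fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    print("selected features:", np.flatnonzero(gbest))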

  17. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.

  18. International Bathymetric Chart of the Arctic Ocean, Version 2.23

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  19. Using a personal watercraft for monitoring bathymetric changes at storm scale

    NARCIS (Netherlands)

    Van Son, S.T.J.; Lindenbergh, R.C.; De Schipper, M.A.; De Vries, S.; Duijnmayer, K.

    2009-01-01

    Monitoring and understanding coastal processes is important for the Netherlands since the most densely populated areas are situated directly behind the coastal defense. Traditionally, bathymetric changes are monitored at annual intervals, although nowadays it is understood that most dramatic changes

  20. Magnetic and bathymetric investigations over the Vema Region of the Central Indian Ridge: Tectonic implications

    Digital Repository Service at National Institute of Oceanography (India)

    Drolia, R.K.; Ghose, I.; Subrahmanyam, A.S.; Rao, M.M.M.; Kessarkar, P.M.; Murthy, K.S.R.

    Honeywell Elac narrowbeam echosounder. Post-cruise processing involved digitisation of echograms, interpolation of data at 1 min intervals and merging of the magnetic field intensity data with the bathymetric data. Mathew's correction was applied...

  1. Modeling and Analysis of Integrated Bathymetric and Geodetic Data for Inventory Surveys of Mining Water Reservoirs

    Directory of Open Access Journals (Sweden)

    Ochałek Agnieszka

    2018-01-01

    Full Text Available A significant part of hydrography is bathymetry, its empirical component. Bathymetry is the study of the underwater depth of waterways and reservoirs, and the graphic presentation of the measured data in the form of bathymetric maps, cross-sections and three-dimensional bottom models. Bathymetric measurements are based on the Global Positioning System and devices for hydrographic measurements – an echo sounder and a side sonar scanner. In this research the authors focused on the case of obtaining and processing bathymetric data and building numerical bottom models of two post-mining reclaimed water reservoirs: Dwudniaki Lake in Wierzchosławice and a flooded quarry in Zabierzów. The report also includes an analysis of data from still-operating mining water reservoirs located in Poland to depict how bathymetry can be used in the mining industry. A significant issue is the integration of bathymetric data with geodetic data from tachymetry and terrestrial laser scanning measurements.

  2. Modeling and Analysis of Integrated Bathymetric and Geodetic Data for Inventory Surveys of Mining Water Reservoirs

    Science.gov (United States)

    Ochałek, Agnieszka; Lipecki, Tomasz; Jaśkowski, Wojciech; Jabłoński, Mateusz

    2018-03-01

    A significant part of hydrography is bathymetry, its empirical component. Bathymetry is the study of the underwater depth of waterways and reservoirs, and the graphic presentation of the measured data in the form of bathymetric maps, cross-sections and three-dimensional bottom models. Bathymetric measurements are based on the Global Positioning System and devices for hydrographic measurements - an echo sounder and a side sonar scanner. In this research the authors focused on the case of obtaining and processing bathymetric data and building numerical bottom models of two post-mining reclaimed water reservoirs: Dwudniaki Lake in Wierzchosławice and a flooded quarry in Zabierzów. The report also includes an analysis of data from still-operating mining water reservoirs located in Poland to depict how bathymetry can be used in the mining industry. A significant issue is the integration of bathymetric data with geodetic data from tachymetry and terrestrial laser scanning measurements.

  3. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Structures 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Structures are derived from derivatives of Simrad EM-3000 multibeam bathymetry (1 m and 3 m resolution). BPI structures are...

  4. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Zones 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Zones derived from derivatives of Simrad EM-3000 multibeam bathymetry (3 m resolution). BPI zones are surficial characteristics of...

  5. Studies of high resolution array processing algorithms for multibeam bathymetric applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Schenke, H.W.

    In this paper a study is initiated to observe the usefulness of directional spectral estimation techniques for underwater bathymetric applications. High resolution techniques like the Maximum Likelihood (ML) method and the Maximum Entropy (ME...

  6. International Bathymetric Chart of the Arctic Ocean, Version 1.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  7. Volcanic and Hydrothermal Activity of the North Su Volcano: New Insights from Repeated Bathymetric Surveys and ROV Observations

    Science.gov (United States)

    Thal, J.; Bach, W.; Tivey, M.; Yoerger, D.

    2013-12-01

    Bathymetric data from cruises in 2002, 2006, and 2011 were combined and compared to determine the evolution of volcanic activity, seafloor structures and erosional features, and to identify and document the distribution of hydrothermal vents on North Su volcano, SuSu Knolls, eastern Manus Basin (Papua New Guinea). Geologic mapping based on ROV observations from 2006 (WHOI Jason-2) and 2011 (MARUM Quest-4000), combined with repeated bathymetric surveys from 2002 and 2011, is used to identify morphologic features on the slopes of North Su and to track temporal changes. ROV MARUM Quest-4000 bathymetry was used to develop a 10 m grid of the top of North Su to precisely depict recent changes. In 2006, the south slope of North Su was steeply sloped and featured numerous white smoker vents discharging acid sulfate waters. These vents were covered by several tens of meters of sand- to gravel-sized volcanic material in 2011. The growth of this new cone changed the bathymetry of the south flank of North Su by up to ~50 m and emplaced ~0.014 km³ of clastic volcanic material. This material is primarily composed of fractured altered dacite and massive fresh dacite as well as crystals of opx, cpx, olivine and plagioclase. There is no evidence for pyroclastic fragmentation, so we hypothesize that the fragmentation is likely related to hydrothermal explosions. Hydrothermal activity varies over a short (~50 m) lateral distance from 'flashing' black smokers to acidic white smoker vents. Within 2 weeks of observation time in 2011, the white smoker vents varied markedly in activity, suggesting a highly episodic hydrothermal system. Based on ROV video recordings, we identified steep (up to 30°) slopes exposing pillars and walls of hydrothermally cemented volcaniclastic material representing former fluid upflow zones. These features show that hydrothermal activity has increased slope stability, as hydrothermal cementation has prevented slope collapse. Additionally, in some places

  8. A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease.

    Science.gov (United States)

    Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang

    2017-06-09

    Machine learning methods have been widely used in recent years for detection of neuroimaging biomarkers in regions of interest (ROIs) and assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection methods and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved compared with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming the potential of our method in assisting diagnosis of the disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Feature-Based Change Detection Reveals Inconsistent Individual Differences in Visual Working Memory Capacity.

    Science.gov (United States)

    Ambrose, Joseph P; Wijeakumar, Sobanawartiny; Buss, Aaron T; Spencer, John P

    2016-01-01

    Visual working memory (VWM) is a key cognitive system that enables people to hold visual information in mind after a stimulus has been removed and compare past and present to detect changes that have occurred. VWM is severely capacity-limited to around 3-4 items, although there are robust individual differences in this limit. Importantly, these individual differences are evident in neural measures of VWM capacity. Here, we capitalized on recent work showing that capacity is lower for more complex stimulus dimensions. In particular, we asked whether individual differences in capacity remain consistent if capacity is shifted by a more demanding task, and, further, whether the correspondence between behavioral and neural measures holds across a shift in VWM capacity. Participants completed a change detection (CD) task with simple colors and complex shapes in an fMRI experiment. As expected, capacity was significantly lower for the shape dimension. Moreover, there were robust individual differences in behavioral estimates of VWM capacity across dimensions. Similarly, participants with a stronger BOLD response for color also showed a strong neural response for shape within the lateral occipital cortex, intraparietal sulcus (IPS), and superior IPS. Although there were robust individual differences in the behavioral and neural measures, we found little evidence of systematic brain-behavior correlations across feature dimensions. This suggests that behavioral and neural measures of capacity provide different views onto the processes that underlie VWM and CD. Recent theoretical approaches that attempt to bridge between behavioral and neural measures are well positioned to address these findings in future work.

  10. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
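
    A sketch of the hybrid-feature idea, under the assumption that a generic torchvision ResNet-18 can stand in for the paper's CNN and that multi-radius uniform LBP histograms approximate the MLBP skin-detail features; the concatenated vector would then be fed to an SVM for real-versus-attack classification. Names, sizes and radii are illustrative.

    # Hybrid features: generic CNN backbone features + multi-radius LBP histograms.
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    backbone = models.resnet18(weights=None)   # untrained stand-in backbone
    backbone.fc = torch.nn.Identity()          # expose the 512-D penultimate features
    backbone.eval()
    to_tensor = T.Compose([T.ToTensor(), T.Resize((224, 224))])

    def deep_features(img_rgb_uint8):
        with torch.no_grad():
            x = to_tensor(img_rgb_uint8).unsqueeze(0)
            return backbone(x).squeeze(0).numpy()

    def mlbp_features(img_gray_uint8, radii=(1, 2, 3)):
        hists = []
        for r in radii:
            lbp = local_binary_pattern(img_gray_uint8, P=8 * r, R=r, method="uniform")
            h, _ = np.histogram(lbp, bins=8 * r + 2, density=True)
            hists.append(h)
        return np.concatenate(hists)

    def hybrid_features(img_rgb_uint8):
        gray = img_rgb_uint8.mean(axis=2).astype(np.uint8)
        return np.concatenate([deep_features(img_rgb_uint8), mlbp_features(gray)])

    if __name__ == "__main__":
        face = (np.random.default_rng(0).random((256, 256, 3)) * 255).astype(np.uint8)
        print(hybrid_features(face).shape)     # deep dims + LBP histogram bins
        # Training would stack hybrid_features over labelled real/attack crops:
        # clf = SVC(kernel="rbf").fit(X_train, y_train)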

  11. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  12. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  13. Linear feature detection algorithm for astronomical surveys - II. Defocusing effects on meteor tracks

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan; Rasmussen, Andrew; Ivezić, Željko

    2018-03-01

    Given the current limited knowledge of meteor plasma micro-physics and its interaction with the surrounding atmosphere and ionosphere, meteors are a highly interesting observational target for high-resolution wide-field astronomical surveys. Such surveys are capable of resolving the physical size of meteor plasma heads, but they produce large volumes of images that need to be automatically inspected for possible existence of long linear features produced by meteors. Here, we show how big aperture sky survey telescopes detect meteors as defocused tracks with a central brightness depression. We derive an analytic expression for a defocused point source meteor track and use it to calculate brightness profiles of meteors modelled as uniform brightness discs. We apply our modelling to meteor images as seen by the Sloan Digital Sky Survey and Large Synoptic Survey Telescope telescopes. The expression is validated by Monte Carlo ray-tracing simulations of photons travelling through the atmosphere and the Large Synoptic Survey Telescope telescope optics. We show that estimates of the meteor distance and size can be extracted from the measured full width at half-maximum and the strength of the central dip in the observed brightness profile. However, this extraction becomes difficult when the defocused meteor track is distorted by the atmospheric seeing or contaminated by a long-lasting glowing meteor trail. The full width at half-maximum of satellite tracks is distinctly narrower than meteor values, which enables removal of a possible confusion between satellites and meteors.
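
    The analytic expression itself is not reproduced here, but the geometric reason for the central brightness depression can be illustrated numerically: if the defocused image of a point source through an annular (centrally obscured) aperture is approximated as a thin ring, sweeping that ring along a track leaves less light on the centre line than on either side. All sizes below are arbitrary pixel units chosen for illustration.

    # Toy model: cross-track profile of a long track = row sums of a ring PSF.
    import numpy as np

    def annulus_psf(size=101, r_outer=20, r_inner=12):
        """Defocus 'donut' kernel for an obscured (annular) aperture."""
        y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
        r = np.hypot(x, y)
        psf = ((r <= r_outer) & (r >= r_inner)).astype(float)
        return psf / psf.sum()

    def track_cross_profile(psf):
        """Cross-track brightness of a long track: integrate the PSF along the track."""
        return psf.sum(axis=1)

    profile = track_cross_profile(annulus_psf())
    centre, peak = profile[len(profile) // 2], profile.max()
    print("central dip depth relative to peak: %.2f" % (1 - centre / peak))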

  14. Effective Detection of Sub-Surface Archeological Features from Laser Scanning Point Clouds and Imagery Data

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Walczykowski, P.; Wierzbicki, D.; Delis, P.; Lada, A.

    2017-08-01

    The archaeological heritage is non-renewable, and any invasive research or other actions involving mechanical or chemical intervention into the ground lead to the destruction of an archaeological site in whole or in part. For this reason, modern archaeology is looking for alternative, non-destructive and non-invasive methods of identifying new objects. The concept of aerial archaeology is the relation between the presence of an archaeological site at a particular location and the phenomena that can be observed at the same place on the terrain surface from an airborne platform. One of the most appreciated, and moreover extremely precise, methods of such measurement is airborne laser scanning. In this research an airborne laser scanning point cloud with a density of 5 points/sq. m was used. Additionally, unmanned aerial vehicle imagery data were acquired. The test area is located in central Europe. The preliminary verification of potential microstructure locations was the creation of digital terrain and surface models. These models gave information about differences in elevation, as well as regular shapes and sizes that can be related to former settlements or sub-surface features. The paper presents the results of the detection of potential sub-surface microstructure fields in a forested area.

  15. EFFECTIVE DETECTION OF SUB-SURFACE ARCHEOLOGICAL FEATURES FROM LASER SCANNING POINT CLOUDS AND IMAGERY DATA

    Directory of Open Access Journals (Sweden)

    A. Fryskowska

    2017-08-01

    Full Text Available The archaeological heritage is non-renewable, and any invasive research or other actions involving mechanical or chemical intervention into the ground lead to the destruction of an archaeological site in whole or in part. For this reason, modern archaeology is looking for alternative, non-destructive and non-invasive methods of identifying new objects. The concept of aerial archaeology is the relation between the presence of an archaeological site at a particular location and the phenomena that can be observed at the same place on the terrain surface from an airborne platform. One of the most appreciated, and moreover extremely precise, methods of such measurement is airborne laser scanning. In this research an airborne laser scanning point cloud with a density of 5 points/sq. m was used. Additionally, unmanned aerial vehicle imagery data were acquired. The test area is located in central Europe. The preliminary verification of potential microstructure locations was the creation of digital terrain and surface models. These models gave information about differences in elevation, as well as regular shapes and sizes that can be related to former settlements or sub-surface features. The paper presents the results of the detection of potential sub-surface microstructure fields in a forested area.

  16. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.

    Science.gov (United States)

    Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua

    2017-01-01

    The number of Alzheimer's disease patients is increasing rapidly every year. Scholars tend to use computer vision methods to develop automatic diagnosis systems. (Background) In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network, and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification was harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
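
    A compact sketch of linear regression classification, the classifier highlighted above, assuming the 256 pseudo Zernike features have already been extracted: each test vector is regressed onto every class's training matrix by least squares and assigned to the class with the smallest reconstruction residual. The data below are synthetic placeholders.

    # Linear regression classification (LRC) on precomputed feature vectors.
    import numpy as np

    def lrc_predict(x, class_matrices):
        """class_matrices: dict class_label -> (n_features, n_train) matrix."""
        best_label, best_residual = None, np.inf
        for label, X in class_matrices.items():
            beta, *_ = np.linalg.lstsq(X, x, rcond=None)   # least-squares fit
            residual = np.linalg.norm(x - X @ beta)
            if residual < best_residual:
                best_label, best_residual = label, residual
        return best_label

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        classes = {c: rng.normal(loc=c, size=(256, 30)) for c in (0, 1)}
        x = rng.normal(loc=1, size=256)                    # a "class 1"-like sample
        print("predicted class:", lrc_predict(x, classes))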

  17. FIRST SIMULTANEOUS DETECTION OF MOVING MAGNETIC FEATURES IN PHOTOSPHERIC INTENSITY AND MAGNETIC FIELD DATA

    International Nuclear Information System (INIS)

    Lim, Eun-Kyung; Yurchyshyn, Vasyl; Goode, Philip

    2012-01-01

    The formation and the temporal evolution of a bipolar moving magnetic feature (MMF) was studied with high-spatial and temporal resolution. The photometric properties were observed with the New Solar Telescope at Big Bear Solar Observatory using a broadband TiO filter (705.7 nm), while the magnetic field was analyzed using the spectropolarimetric data obtained by Hinode. For the first time, we observed a bipolar MMF simultaneously in intensity images and magnetic field data, and studied the details of its structure. The vector magnetic field and the Doppler velocity of the MMF were also studied. A bipolar MMF with its positive polarity closer to the negative penumbra formed, accompanied by a bright, filamentary structure in the TiO data connecting the MMF and a dark penumbral filament. A fast downflow (≤2 km s⁻¹) was detected at the positive polarity. The vector magnetic field obtained from the full Stokes inversion revealed that a bipolar MMF has a U-shaped magnetic field configuration. Our observations provide a clear intensity counterpart of the observed MMF in the photosphere, and strong evidence of the connection between the MMF and the penumbral filament as a serpentine field.

  18. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    Science.gov (United States)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    The multibeam bathymetric system (MBS) has been widely applied in marine surveying for providing high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors have been corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, the paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps, namely the separation of the low-frequency and the high-frequency part of bathymetric data, the reconstruction of the trend of actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation, are involved in the method. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method should be widely applied to MBS data processing in deep water.
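
    A toy one-dimensional illustration of the separation-and-merge idea, not the paper's actual spectral procedure: Gaussian smoothing stands in for the frequency-based split of a profile into a low-frequency part (trend plus the slowly varying combined residual error) and a high-frequency microtopography part, and a corrected trend is then merged with the preserved microtopography. In practice the corrected trend would be reconstructed from the data themselves or from reference information; the known synthetic trend is used here only for brevity.

    # Split a synthetic bathymetric profile into low/high frequency parts,
    # then merge a corrected trend with the preserved microtopography.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.linspace(0, 2000, 2000)                          # along-profile distance (m)
    true_trend = -500 - 0.02 * x                            # sloping deep seabed
    micro = 0.5 * np.sin(x / 15.0)                          # small-scale topography
    residual_error = 3.0 * np.sin(x / 400.0)                # slowly varying systematic error
    measured = true_trend + micro + residual_error

    low = gaussian_filter1d(measured, sigma=50)             # low-frequency part
    high = measured - low                                   # extracted microtopography
    corrected_trend = gaussian_filter1d(true_trend, sigma=50)  # stand-in for reconstructed trend
    merged = corrected_trend + high

    truth = true_trend + micro
    print("RMS error before: %.2f m, after: %.2f m"
          % (np.sqrt(np.mean((measured - truth) ** 2)),
             np.sqrt(np.mean((merged - truth) ** 2))))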

  19. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, Tyrone

    2015-09-01

    Full Text Available data, which is rarely available in operational networks. It uses normalized cluster validity indices as an objective function that is optimized over the search space of candidate feature subsets via a genetic algorithm. Feature sets produced...

  20. Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery

    Science.gov (United States)

    Moody, Daniela Irina

    2018-04-17

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
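
    A minimal sketch of the pipeline's shape, assuming scikit-learn: a dictionary is learned over image patches (MiniBatchDictionaryLearning stands in for the Hebbian learning rule mentioned in the record), every patch is sparse-coded over that dictionary, and the codes are k-means clustered into land cover categories. The single random band below is a placeholder for multispectral or hyperspectral data.

    # Clustering of sparse approximations, in miniature: dictionary -> codes -> k-means.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    band = rng.random((256, 256))                           # one band of a satellite tile
    patches = extract_patches_2d(band, (8, 8), max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)                      # remove per-patch brightness

    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                       batch_size=256, random_state=0).fit(X)
    codes = dico.transform(X)                               # sparse representations
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
    print("patches per land-cover cluster:", np.bincount(labels))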

  1. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    Science.gov (United States)

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

    Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low-contrast in color and high-contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image as a significant top-down feature for visual attention in the image is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three public available data sets.

  2. Surveying alignment-free features for Ortholog detection in related yeast proteomes by using supervised big data classifiers.

    Science.gov (United States)

    Galpert, Deborah; Fernández, Alberto; Herrera, Francisco; Antunes, Agostinho; Molina-Ruiz, Reinaldo; Agüero-Chapin, Guillermin

    2018-05-03

    The development of new ortholog detection algorithms and the improvement of existing ones are of major importance in functional genomics. We have previously introduced a successful supervised pairwise ortholog classification approach implemented in a big data platform that considered several pairwise protein features and the low ortholog pair ratios found between two annotated proteomes (Galpert, D. et al., BioMed Research International, 2015). The supervised models were built and tested using a Saccharomycete yeast benchmark dataset proposed by Salichos and Rokas (2011). Although several pairwise protein features were combined in a supervised big data approach, they all, to some extent, were alignment-based features, and the proposed algorithms were evaluated on a single test set. Here, we aim to evaluate the impact of alignment-free features on the performance of supervised models implemented in the Spark big data platform for pairwise ortholog detection in several related yeast proteomes. The Spark Random Forest and Decision Trees with oversampling and undersampling techniques, built with only alignment-based similarity measures or combined with several alignment-free pairwise protein features, showed the highest classification performance for ortholog detection in three yeast proteome pairs. Although such supervised approaches outperformed traditional methods, there were no significant differences between the exclusive use of alignment-based similarity measures and their combination with alignment-free features, even within the twilight zone of the studied proteomes. Only when alignment-based and alignment-free features were combined in Spark Decision Trees with imbalance management could a higher success rate (98.71%) within the twilight zone be achieved for a yeast proteome pair that underwent a whole genome duplication. The feature selection study showed that alignment-based features were top-ranked for the best classifiers while the runners-up were

  3. Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High‐Resolution Spectral Features

    Directory of Open Access Journals (Sweden)

    Hyoung‐Gook Kim

    2017-12-01

    Full Text Available Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception‐based spatial and spectral‐domain noise‐reduced harmonic features are extracted from multichannel audio and used as high‐resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short‐term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.
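
    A minimal PyTorch sketch of the model family described above: a gated recurrent (GRU) network over per-frame spectral features with one sigmoid output per event class, so that temporally overlapping (polyphonic) events can be active simultaneously. The feature extraction, layer sizes and loss handling are illustrative and not the authors' exact architecture.

    # GRU-based polyphonic sound event detector over per-frame spectral features.
    import torch
    import torch.nn as nn

    class GRUEventDetector(nn.Module):
        def __init__(self, n_features=64, n_classes=6, hidden=128):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                      # x: (batch, frames, features)
            h, _ = self.gru(x)
            return torch.sigmoid(self.head(h))     # per-frame, per-class activities

    model = GRUEventDetector()
    spectral_frames = torch.randn(4, 500, 64)      # e.g. noise-reduced harmonic features
    activity = model(spectral_frames)              # (4, 500, 6) frame-wise probabilities
    loss = nn.BCELoss()(activity, torch.randint(0, 2, activity.shape).float())
    print(activity.shape, float(loss))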

  4. A Detection of the Baryon Acoustic Oscillation Features in the SDSS BOSS DR12 Galaxy Bispectrum

    Science.gov (United States)

    Pearson, David W.; Samushia, Lado

    2018-05-01

    We present the first high significance detection (4.1σ) of the Baryon Acoustic Oscillations (BAO) feature in the galaxy bispectrum of the twelfth data release (DR12) of the Baryon Oscillation Spectroscopic Survey (BOSS) CMASS sample (0.43 ≤ z ≤ 0.7). We measured the scale dilation parameter, α, using the power spectrum, bispectrum, and both simultaneously for DR12, plus 2048 MultiDark-PATCHY mocks in the North and South Galactic Caps (NGC and SGC, respectively), and the volume weighted averages of those two samples (N+SGC). The fitting to the mocks validated our analysis pipeline, yielding values consistent with the mock cosmology. By fitting to the power spectrum and bispectrum separately, we tested the robustness of our results, finding consistent values from the NGC, SGC and N+SGC in all cases. We found DV = 2032 ± 24(stat.) ± 15(sys.) Mpc, DV = 2038 ± 55(stat.) ± 15(sys.) Mpc, and DV = 2031 ± 22(stat.) ± 10(sys.) Mpc from the N+SGC power spectrum, bispectrum and simultaneous fitting, respectively. Our bispectrum measurement precision was mainly limited by the size of the covariance matrix. Based on the fits to the mocks, we showed that if a less noisy estimator of the covariance were available, from either a theoretical computation or a larger suite of mocks, the constraints from the bispectrum and simultaneous fits would improve to 1.1 per cent (1.3 per cent with systematics) and 0.7 per cent (0.9 per cent with systematics), respectively, with the latter being slightly more precise than the power spectrum only constraints from the reconstructed field.
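
    A deliberately simplified sketch of how a scale dilation parameter is fitted: a template containing a BAO-like wiggle is dilated in wavenumber by α, compared with the measured spectrum through a chi-square built from a covariance matrix, and α is obtained by minimisation. The template, covariance and "data" below are synthetic stand-ins, not the BOSS DR12 measurements or the joint power spectrum plus bispectrum likelihood.

    # Toy chi-square fit of a scale dilation parameter alpha.
    import numpy as np
    from scipy.optimize import minimize_scalar

    k = np.linspace(0.02, 0.3, 60)                          # wavenumbers (h/Mpc)
    template = lambda kk: kk ** -1.5 * (1 + 0.05 * np.sin(kk / 0.06))  # smooth + wiggle
    alpha_true = 1.02
    rng = np.random.default_rng(3)
    cov = np.diag((0.02 * template(k)) ** 2)                # diagonal toy covariance
    data = template(alpha_true * k) + rng.multivariate_normal(np.zeros_like(k), cov)
    cov_inv = np.linalg.inv(cov)

    def chi2(alpha):
        resid = data - template(alpha * k)
        return resid @ cov_inv @ resid

    fit = minimize_scalar(chi2, bounds=(0.9, 1.1), method="bounded")
    print("best-fit alpha = %.4f  (chi2 = %.1f)" % (fit.x, fit.fun))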

  5. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    Science.gov (United States)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    varying considerably because of the various base points that were needed to cover the whole landslide. The resulting point spacing is approximately 20 cm. The achievable accuracy was about 10 cm. The airborne data was acquired with mean point densities of 2 points per square meter. The accuracy of this dataset was about 15 cm. The second testing site is an area of the Leithagebirge in Burgenland, Austria. The data was acquired by an airborne Riegl LMS-Q560 laser scanner mounted on a helicopter. The mean point density was 6-8 points per square meter with an accuracy better than 10 cm. We applied our processing chain on the datasets individually. First, they were transformed to local reference frames and fine adjustments of the individual scans and flight strips, respectively, were applied. Subsequently, the local regression planes were determined for each point of the point clouds and planar features were extracted by means of the proposed approach. It turned out that even small displacements can be detected if the number of points used for the fit is enough to define a parallel but somewhat displaced plane. Smaller cracks and erosional incisions do not disturb the plane fitting, because mostly they are filtered out as outliers. A comparison of the different campaigns of the Doren site showed exciting matches of the detected geomorphic structures. Although the geomorphic structure of the Leithagebirge differs from the Doren landslide, and the scales of the two studies were also different, reliable results were achieved in both cases. Additionally, the approach turned out to be highly robust against points which were not located on the terrain. Hence, no false positives were determined within the dense vegetation above the terrain, while it was possible to cover the investigated areas completely with reliable planes. In some cases, however, some structures in the tree crowns were also recognized, but these small patches could be very well sorted out from the geomorphically

  6. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and a single beam echo sounder; inertial platform for attitude control; ultrasound obstacle-detection system with temperature control system; emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments’ performance and survey accuracy. PMID:26729117

  7. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-12-29

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and a single beam echo sounder; inertial platform for attitude control; ultrasound obstacle-detection system with temperature control system; emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy.

  8. Extracting Information from Conventional AE Features for Fatigue Onset Damage Detection in Carbon Fiber Composites

    DEFF Research Database (Denmark)

    Unnthorsson, Runar; Pontoppidan, Niels Henrik Bohl; Jonsson, Magnus Thor

    2005-01-01

    We have analyzed simple data fusion and preprocessing methods on Acoustic Emission measurements of prosthetic feet made of carbon fiber reinforced composites. This paper presents the initial research steps, aiming at reducing the time spent on the fatigue test. With a simple single feature...... approaches can readily be investigated using the improved features, possibly improving the performance using multiple feature classifiers, e.g., voting systems, Support Vector Machines and Gaussian Mixtures....

  9. Building an intrusion detection system using a filter-based feature selection algorithm

    NARCIS (Netherlands)

    Ambusaidi, Mohammed A.; He, Xiangjian; Nanda, Priyadarsi; Tan, Zhiyuan

    2016-01-01

    Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a

  10. Testing of Haar-Like Feature in Region of Interest Detection for Automated Target Recognition (ATR) System

    Science.gov (United States)

    Zhang, Yuhan; Lu, Dr. Thomas

    2010-01-01

    The objectives of this project were to develop a ROI (Region of Interest) detector using Haar-like features, similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector that uses an Optimal Trade-off Maximum Average Correlation Height (OTMACH) filter. The ROI detector included three parts: (1) automated Haar-like feature selection, finding a small set of the most relevant Haar-like features for detecting ROIs that contained a target; (2) given the small set of Haar-like features from the previous step, training a neural network to recognize ROIs with targets by taking the Haar-like features as inputs; (3) using the trained neural network, developing a filtering method to process the neural network responses into a small set of regions of interest. All three parts needed to be coded in Matlab. The parameters in the detector needed to be trained by machine learning and tested with specific datasets. Since the OpenCV library and Haar-like features were not available in Matlab, the Haar-like feature calculation needed to be implemented in Matlab. The code for Adaptive Boosting and max/min filters in Matlab could be found on the Internet but needed to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing the accuracy and the speed of the new detector against the existing OTMACH detector. The speed was defined as the average speed of finding the regions of interest in an image. The accuracy was measured by the number of false positives (false alarms) at the same detection rate between the two detectors.
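
    The core calculation the project had to implement is the integral-image trick that makes Haar-like features cheap: any rectangle sum, and hence any Haar-like feature, reduces to a handful of lookups. The record's implementation was in Matlab; the sketch below shows the same idea in Python with an illustrative two-rectangle (left-minus-right) feature and illustrative geometry.

    # Integral image and a two-rectangle Haar-like feature.
    import numpy as np

    def integral_image(img):
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] computed from the integral image ii."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            total -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += ii[r0 - 1, c0 - 1]
        return total

    def haar_two_rect(ii, r, c, h, w):
        """Left-half minus right-half response of an (h x w) window at (r, c)."""
        half = w // 2
        left = rect_sum(ii, r, c, r + h, c + half)
        right = rect_sum(ii, r, c + half, r + h, c + w)
        return left - right

    img = np.zeros((24, 24))
    img[:, 12:] = 1.0                                       # bright right half
    ii = integral_image(img)
    print(haar_two_rect(ii, 0, 0, 24, 24))                  # strong negative response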

  11. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.

  12. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2018-02-01

    Full Text Available To improve the accuracy of change detection in urban areas using bi-temporal high-resolution remote sensing images, a novel object-based change detection scheme combining multiple features and ensemble learning is proposed in this paper. Image segmentation is conducted to determine the objects in bi-temporal images separately. Subsequently, three kinds of object features, i.e., spectral, shape and texture, are extracted. Using the image differencing process, a difference image is generated and used as the input for nonlinear supervised classifiers, including k-nearest neighbor, support vector machine, extreme learning machine and random forest. Finally, the results of multiple classifiers are integrated using an ensemble rule called weighted voting to generate the final change detection result. Experimental results of two pairs of real high-resolution remote sensing datasets demonstrate that the proposed approach outperforms the traditional methods in terms of overall accuracy and generates change detection maps with a higher number of homogeneous regions in urban areas. Moreover, the influences of segmentation scale and the feature selection strategy on the change detection performance are also analyzed and discussed.
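
    A sketch of the final ensemble step under stated assumptions: three of the named nonlinear classifiers (k-NN, SVM and random forest; the extreme learning machine is omitted) are trained on object-level difference features, and their change/no-change votes are merged by weighted voting, with each weight proportional to that classifier's validation accuracy. The difference features and labels below are synthetic placeholders.

    # Weighted-voting ensemble for object-based change detection.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 12))                          # spectral/shape/texture diffs
    y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=800) > 0).astype(int)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    members = [KNeighborsClassifier(5), SVC(),
               RandomForestClassifier(200, random_state=0)]
    weights = []
    for m in members:
        m.fit(X_tr, y_tr)
        weights.append(m.score(X_val, y_val))               # validation accuracy as weight
    weights = np.array(weights) / sum(weights)

    def weighted_vote(X_new):
        votes = np.stack([m.predict(X_new) for m in members])   # (n_members, n_objects)
        return (weights @ votes > 0.5).astype(int)               # change / no-change labels

    print("ensemble accuracy:", (weighted_vote(X_val) == y_val).mean())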

  13. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    Science.gov (United States)

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.

    2015-01-01

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 1e-3) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255

  14. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
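
    A compact sketch of the dual-classification idea follows, assuming the two 4-D morphology feature sets have already been extracted from the two vessel maps; averaging the two SVM scores is one plausible combination rule and is an assumption, as is the RBF kernel choice.

```python
import numpy as np
from sklearn.svm import SVC

def train_dual_classifier(F1_train, F2_train, y_train):
    """F1/F2: 4-D morphology features from the two vessel maps (n, 4);
    y: 1 = new-vessel patch, 0 = normal. Returns a combined predictor."""
    svm1 = SVC(kernel="rbf", probability=True).fit(F1_train, y_train)
    svm2 = SVC(kernel="rbf", probability=True).fit(F2_train, y_train)

    def predict(F1, F2, threshold=0.5):
        # average of the two independent SVM scores as the combination rule
        p = 0.5 * (svm1.predict_proba(F1)[:, 1] + svm2.predict_proba(F2)[:, 1])
        return (p >= threshold).astype(int)
    return predict
```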

  15. Feature-based Detection and Discrimination at DuPont's Lake Success Business Park, Connecticut

    National Research Council Canada - National Science Library

    Keiswetter, Dean A

    2007-01-01

    The objective of this demonstration was to determine if laser-positioned, high-density EM61 data acquired in a moving survey mode could support feature-based discrimination decisions for a canopied...

  16. Better feature acquisition through the use of infrared imaging for human detection systems

    CSIR Research Space (South Africa)

    Kunene, Dumisani C

    2017-09-01

    Full Text Available are used for training the classifiers with infrared samples. The conventional use of support vector machines (SVM) on HOG features is tested against extreme learning machines (ELM) and convolutional neural networks (CNN). The results obtained show...

  17. An Energy efficient application specific integrated circuit for electrocardiogram feature detection and its potential for ambulatory cardiovascular disease detection.

    Science.gov (United States)

    Jain, Sanjeev Kumar; Bhaumik, Basabi

    2016-03-01

    A novel algorithm based on forward search is developed for real-time electrocardiogram (ECG) signal processing and implemented in an application-specific integrated circuit (ASIC) for the diagnosis of QRS-complex-related cardiovascular disease. The authors evaluated their algorithm using the MIT-BIH database and achieved a sensitivity of 99.86% and a specificity of 99.93% for QRS complex peak detection. In this Letter, the Physionet PTB diagnostic ECG database is used for QRS-complex-related disease detection. An ASIC for cardiovascular disease detection was fabricated using 130-nm CMOS high-speed process technology. The area of the ASIC is 0.5 mm². The power dissipation is 1.73 μW at an operating frequency of 1 kHz with a supply voltage of 0.6 V. The output from the ASIC is fed to the authors' Android application, which generates a diagnostic report that can be sent to a cardiologist by email. The ASIC shows an average failed detection rate of 0.16% for six-lead data from 290 patients in the PTB diagnostic ECG database. The authors also implemented a low-leakage version of the ASIC, which dissipates only 45 pJ with a supply voltage of 0.9 V. The proposed ASIC is well suited for an energy-efficient telemetry cardiovascular disease detection system.
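
    The exact forward-search rule is not described in this record, so the sketch below is only a generic stand-in: an adaptive-threshold search over a slope-energy signal with a refractory period between accepted peaks. The sampling rate, window length and threshold factor are assumed parameters.

```python
import numpy as np

def detect_qrs(ecg, fs=1000, refractory_s=0.25, win_s=2.0):
    """Generic QRS peak detection sketch: scan forward through the signal,
    flag samples whose slope energy exceeds an adaptive threshold derived
    from a trailing window, then refine the peak and enforce a refractory
    period so a single QRS complex is not detected twice."""
    ecg = np.asarray(ecg, dtype=float)
    slope_energy = np.gradient(ecg) ** 2              # emphasise steep QRS slopes
    win = int(win_s * fs)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for n in range(win, len(ecg)):
        thr = 2.5 * slope_energy[n - win:n].mean()     # adaptive threshold
        if slope_energy[n] > thr and n - last >= refractory:
            lo = max(0, n - refractory // 2)
            seg = ecg[lo:n + refractory // 2]
            peaks.append(lo + int(np.argmax(seg)))     # refine to the local R peak
            last = peaks[-1]
    return peaks
```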

  18. Comparison of spatial frequency domain features for the detection of side attack explosive ballistics in synthetic aperture acoustics

    Science.gov (United States)

    Dowdy, Josh; Anderson, Derek T.; Luke, Robert H.; Ball, John E.; Keller, James M.; Havens, Timothy C.

    2016-05-01

    Explosive hazards in current and former conflict zones are a threat to both military and civilian personnel. As a result, much effort has been dedicated to identifying automated algorithms and systems to detect these threats. However, robust detection is complicated due to factors like the varied composition and anatomy of such hazards. In order to solve this challenge, a number of platforms (vehicle-based, handheld, etc.) and sensors (infrared, ground penetrating radar, acoustics, etc.) are being explored. In this article, we investigate the detection of side attack explosive ballistics via a vehicle-mounted acoustic sensor. In particular, we explore three acoustic features, one in the time domain and two on synthetic aperture acoustic (SAA) beamformed imagery. The idea is to exploit the varying acoustic frequency profile of a target due to its unique geometry and material composition with respect to different viewing angles. The first two features build their angle specific frequency information using a highly constrained subset of the signal data and the last feature builds its frequency profile using all available signal data for a given region of interest (centered on the candidate target location). Performance is assessed in the context of receiver operating characteristic (ROC) curves on cross-validation experiments for data collected at a U.S. Army test site on different days with multiple target types and clutter. Our preliminary results are encouraging and indicate that the top performing feature is the unrolled two dimensional discrete Fourier transform (DFT) of SAA beamformed imagery.

  19. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    Science.gov (United States)

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) an intensive-computation statistical criterion (Bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.
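
    A minimal bag-of-words baseline in the same spirit (though not the leave-one-out, bootstrap or kernel-entropy criteria used in the paper) is sketched below with scikit-learn; ranking terms by the magnitude of their linear SVM weights is an illustrative assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def rank_ehr_terms(notes, labels, top_k=20):
    """notes: list of free-text EHR documents; labels: 1 = later AL, 0 = no AL.
    Fits a linear max-margin classifier on a bag-of-words representation and
    ranks terms by the magnitude of their weights (a simple filter, not the
    statistical criteria investigated in the study)."""
    vec = CountVectorizer(lowercase=True, min_df=2)
    X = vec.fit_transform(notes)
    clf = LinearSVC(C=1.0).fit(X, labels)
    order = np.argsort(-np.abs(clf.coef_[0]))
    terms = np.asarray(vec.get_feature_names_out())
    return list(zip(terms[order[:top_k]], clf.coef_[0][order[:top_k]]))
```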

  20. Computer aided detection of suspicious regions on digital mammograms : rapid segmentation and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, C; Giacomini, M; Sacile, R [DIST - Department of Communication Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova (Italy); Rosselli Del Turco, M [Centro per lo studio e la prevenzione oncologica, Firenze (Italy)

    1999-12-31

    A method is presented for rapid detection of suspicious regions which consists of two steps. The first step is segmentation based on texture analysis consisting of: histogram equalization, Laws filtering for texture analysis, Gaussian blur and median filtering to enhance differences between tissues in different respects, histogram thresholding to obtain a binary image, logical masking in order to detect regions to be discarded from the analysis, edge detection. This method has been tested on 60 images, obtaining 93% successful detection of suspicious regions. (authors) 4 refs, 9 figs, 1 tabs.

  1. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    Science.gov (United States)

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.

  2. More than a century of bathymetric observations and present-day shallow sediment characterization in Belfast Bay, Maine, USA: implications for pockmark field longevity

    Science.gov (United States)

    Brothers, Laura L.; Kelley, Joseph T.; Belknap, Daniel F.; Barnhardt, Walter A.; Andrews, Brian D.; Maynard, Melissa Landon

    2011-08-01

    Mechanisms and timescales responsible for pockmark formation and maintenance remain uncertain, especially in areas lacking extensive thermogenic fluid deposits (e.g., previously glaciated estuaries). This study characterizes seafloor activity in the Belfast Bay, Maine nearshore pockmark field using (1) three swath bathymetry datasets collected between 1999 and 2008, complemented by analyses of shallow box-core samples for radionuclide activity and undrained shear strength, and (2) historical bathymetric data (report and smooth sheets from 1872, 1947, 1948). In addition, because repeat swath bathymetry surveys are an emerging data source, we present a selected literature review of recent studies using such datasets for seafloor change analysis. This study is the first to apply the method to a pockmark field, and characterizes macro-scale (>5 m) evolution of tens of square kilometers of highly irregular seafloor. Presence/absence analysis yielded no change in pockmark frequency or distribution over a 9-year period (1999-2008). In that time pockmarks did not detectably enlarge, truncate, elongate, or combine. Historical data indicate that pockmark chains already existed in the 19th century. Despite the lack of macroscopic changes in the field, near-bed undrained shear-strength values of less than 7 kPa and scattered downcore 137Cs signatures indicate a highly disturbed setting. Integrating these findings with independent geophysical and geochemical observations made in the pockmark field, it can be concluded that (1) large-scale sediment resuspension and dispersion related to pockmark formation and failure do not occur frequently within this field, and (2) pockmarks can persevere in a dynamic estuarine setting that exhibits minimal modern fluid venting. Although pockmarks are conventionally thought to be long-lived features maintained by a combination of fluid venting and minimal sediment accumulation, this suggests that other mechanisms may be equally active in

  3. Bathymetric terrain model of the Atlantic margin for marine geological investigations

    Science.gov (United States)

    Andrews, Brian D.; Chaytor, Jason D.; ten Brink, Uri S.; Brothers, Daniel S.; Gardner, James V.; Lobecker, Elizabeth A.; Calder, Brian R.

    2016-01-01

    A bathymetric terrain model of the Atlantic margin covering almost 725,000 square kilometers of seafloor from the New England Seamounts in the north to the Blake Basin in the south is compiled from existing multibeam bathymetric data for marine geological investigations. Although other terrain models of the same area are extant, they are produced either from satellite-derived bathymetry at coarse resolution (ETOPO1) or from older bathymetric data collected using a combination of single-beam and multibeam sonars (Coastal Relief Model). The new multibeam data used to produce this terrain model have been edited by using hydrographic data processing software to maximize the quality, usability, and cartographic presentation of the combined 100-meter resolution grid. The final grid provides the largest high-resolution, seamless terrain model of the Atlantic margin.

  4. Less is more: Avoiding the LIBS dimensionality curse through judicious feature selection for explosive detection

    Science.gov (United States)

    Kumar Myakalwar, Ashwin; Spegazzini, Nicolas; Zhang, Chi; Kumar Anubham, Siva; Dasari, Ramachandra R.; Barman, Ishan; Kumar Gundawar, Manoj

    2015-01-01

    Despite its intrinsic advantages, translation of laser induced breakdown spectroscopy for material identification has been often impeded by the lack of robustness of developed classification models, often due to the presence of spurious correlations. While a number of classifiers exhibiting high discriminatory power have been reported, efforts in establishing the subset of relevant spectral features that enable a fundamental interpretation of the segmentation capability and avoid the ‘curse of dimensionality’ have been lacking. Using LIBS data acquired from a set of secondary explosives, we investigate judicious feature selection approaches and architect two different chemometric classifiers – based on feature selection through prerequisite knowledge of the sample composition and a genetic algorithm, respectively. While the full spectral input results in a classification rate of ca. 92%, selection of only the carbon-to-hydrogen spectral window results in near-identical performance. Importantly, the genetic algorithm-derived classifier shows a statistically significant improvement to ca. 94% accuracy for prospective classification, even though the number of features used is an order of magnitude smaller. Our findings demonstrate the impact of rigorous feature selection in LIBS and also hint at the feasibility of using a discrete filter based detector thereby enabling a cheaper and compact system more amenable to field operations. PMID:26286630
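
    The sketch below shows a generic genetic-algorithm channel selector of the kind alluded to, assuming spectra are stored as a matrix of channel intensities; the population size, mutation rate, SVM fitness function and truncation selection are all illustrative choices, not the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ga_select_channels(X, y, n_keep=50, pop_size=20, generations=30, seed=0):
    """Toy genetic algorithm over binary channel masks for LIBS spectra.
    X: (n_spectra, n_channels) intensities; y: class labels.
    Fitness = mean 5-fold accuracy of a linear SVM on the selected channels."""
    rng = np.random.default_rng(seed)
    n_ch = X.shape[1]

    def random_mask():
        m = np.zeros(n_ch, bool)
        m[rng.choice(n_ch, n_keep, replace=False)] = True
        return m

    def fitness(mask):
        return cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=5).mean()

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(-scores)
        survivors = [pop[i] for i in order[: pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.choice(len(survivors), 2, replace=False)
            child = np.where(rng.random(n_ch) < 0.5,
                             survivors[a], survivors[b])              # uniform crossover
            child = child ^ (rng.random(n_ch) < 0.01)                 # mutation (bit flips)
            if child.any():                                           # avoid empty masks
                children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return np.flatnonzero(best)                                       # indices of selected channels
```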

  5. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.

  6. Application of IRS-1D data in water erosion features detection (case study: Nour roud catchment, Iran).

    Science.gov (United States)

    Solaimani, K; Amri, M A Hadian

    2008-08-01

    The aim of this study was to assess the capability of Indian Remote Sensing (IRS) 1D data for detecting erosion features created by run-off. The ability of PAN digital data from the IRS-1D satellite to extract erosion features in the Nour-roud catchment, located in Mazandaran province, Iran, was evaluated using GIS techniques. The research method was based on supervised digital classification using the MLC algorithm and on visual interpretation using PMU analysis; the two were then evaluated and compared. Results indicated that, in contrast to digital classification, which achieved an overall accuracy of 40.02% and a kappa coefficient of 31.35% owing to the low spectral resolution, visual interpretation and classification benefited from the high spatial resolution (5.8 m) and allowed erosion features to be classified from these data; the mapped features corresponded so closely with the lithology, slope and hydrograph lines in the GIS that their boundaries can be considered to overlap. Field checks also showed that these data are relatively well suited to this method for investigating erosion features and, in particular, can be applied to identify large erosion features.

  7. Real-Time Detection and Measurement of Eye Features from Color Images

    Directory of Open Access Journals (Sweden)

    Diana Borza

    2016-07-01

    Full Text Available The accurate extraction and measurement of eye features are crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage the external shape of the eye (of the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly available database.

  8. Bathymetric surveys at highway bridges crossing the Missouri River in Kansas City, Missouri, using a multibeam echo sounder, 2010

    Science.gov (United States)

    Huizinga, Richard J.

    2010-01-01

    Bathymetric surveys were conducted by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, on the Missouri River in the vicinity of nine bridges at seven highway crossings in Kansas City, Missouri, in March 2010. A multibeam echo sounder mapping system was used to obtain channel-bed elevations for river reaches that ranged from 1,640 to 1,800 feet long and extending from bank to bank in the main channel of the Missouri River. These bathymetric scans will be used by the Missouri Department of Transportation to assess the condition of the bridges for stability and integrity with respect to bridge scour. Bathymetric data were collected around every pier that was in water, except those at the edge of the water or in extremely shallow water, and one pier that was surrounded by a large debris raft. A scour hole was present at every pier for which bathymetric data could be obtained. The scour hole at a given pier varied in depth relative to the upstream channel bed, depending on the presence and proximity of other piers or structures upstream from the pier in question. The surveyed channel bed at the bottom of the scour hole was between 5 and 50 feet above bedrock. At bridges with drilled shaft foundations, generally there was exposure of the upstream end of the seal course and the seal course often was undermined to some extent. At one site, the minimum elevation of the scour hole at the main channel pier was about 10 feet below the bottom of the seal course, and the sides of the drilled shafts were evident in a point cloud visualization of the data at that pier. However, drilled shafts generally penetrated 20 feet into bedrock. Undermining of the seal course was evident as a sonic 'shadow' in the point cloud visualization of several of the piers. Large dune features were present in the channel at nearly all of the surveyed sites, as were numerous smaller dunes and many ripples. Several of the sites are on or near bends in the river

  9. Fast region-based object detection and tracking using correlation of features

    CSIR Research Space (South Africa)

    Senekal, F

    2010-11-01

    Full Text Available and track a target object (or objects) over a series of digital images. Visual target tracking can be accomplished by feature-based or region-based approaches. In feature-based approaches, interest points are calculated in a digital image, and a local...-time performance based on the computational power that is available on a specific platform. To further reduce the computational requirements, process- ing is restricted to the region of interest (ROI). The region of interest is provided as an input parameter...

  10. Detecting Structural Features in Metallic Glass via Synchrotron Radiation Experiments Combined with Simulations

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

    Full Text Available Revealing the essential structural features of metallic glasses (MGs) will enhance the understanding of glass-forming mechanisms. In this work, a feasible scheme is provided in which we performed state-of-the-art synchrotron-radiation-based experiments combined with simulations to investigate the microstructures of ZrCu amorphous compositions. It is revealed that, in order to stabilize the amorphous state and optimize the topological and chemical distribution, other types of clusters besides the icosahedral or icosahedral-like clusters also participate in the formation of the microstructure in MGs. This cluster-level co-existence feature may be common in this class of glassy materials.

  11. Innovative R.E.A. tools for integrated bathymetric survey

    Science.gov (United States)

    Demarte, Maurizio; Ivaldi, Roberta; Sinapi, Luigi; Bruzzone, Gabriele; Caccia, Massimo; Odetti, Angelo; Fontanelli, Giacomo; Masini, Andrea; Simeone, Emilio

    2017-04-01

    sensors useful for seabed analysis. The very stable platform on top of the USV allows take-off and landing of the RPAS. By exploiting its greater power autonomy and load capability, the USV is used as a mothership for the RPAS: during missions it can recharge the RPAS and act as a communication bridge between the RPAS and its control station. The main advantage of the system is the remote acquisition of high-resolution bathymetric data from the RPAS in areas where opportunities for a systematic, traditional survey are few or none. These tools (a USV carrying an RPAS with a hyperspectral camera) constitute an innovative and powerful system that gives an Emergency Response Unit the right instruments to react quickly. The development of this support could resolve the classical conflict between resolution, needed to capture fine-scale variability, and coverage, needed for large environmental phenomena with very high variability over a wide range of spatial and temporal scales, such as in the coastal environment.

  12. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  13. Non-invasive detection of the freezing of gait in Parkinson's disease using spectral and wavelet features.

    Science.gov (United States)

    Nazarzadeh, Kimia; Arjunan, Sridhar P; Kumar, Dinesh K; Das, Debi Prasad

    2016-08-01

    In this study, we analyzed accelerometer data recorded during gait analysis of Parkinson's disease patients to detect freezing of gait (FOG) episodes. The proposed method filters the recordings to reduce noise in the leg movement signals and computes wavelet coefficients to detect FOG events. A publicly available FOG database was used, and the technique was evaluated using receiver operating characteristic (ROC) analysis. Results show a higher performance of the wavelet feature in discriminating FOG events from background activity when compared with the existing technique.
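
    The record does not give the exact wavelet feature, so the following sketch computes a windowed wavelet "freeze index" (freeze-band to locomotor-band energy ratio) as one plausible reading; the 64 Hz sampling rate (as in the public Daphnet FOG recordings), the db4 wavelet and the band-to-level mapping are assumptions.

```python
import numpy as np
import pywt

def fog_score(acc, fs=64, wavelet="db4", win_s=4.0):
    """Windowed wavelet freeze-index sketch for one accelerometer axis sampled
    at fs Hz: energy in the freeze band (~4-8 Hz, detail level D3 at 64 Hz)
    divided by energy in the locomotor band (~1-4 Hz, levels D4-D5).
    High ratios suggest freezing-of-gait episodes."""
    win = int(win_s * fs)
    scores = []
    for start in range(0, len(acc) - win, win):
        seg = np.asarray(acc[start:start + win], float)
        seg = seg - seg.mean()                        # crude detrend
        coeffs = pywt.wavedec(seg, wavelet, level=5)  # [A5, D5, D4, D3, D2, D1]
        energy = [np.sum(c ** 2) for c in coeffs]
        freeze = energy[3]                            # D3
        locomotor = energy[1] + energy[2]             # D5 + D4
        scores.append(freeze / (locomotor + 1e-12))
    return np.array(scores)
```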

  14. Possible detection of an emission feature near 584 Å in the direction of G191-B2B

    Science.gov (United States)

    Green, James; Bowyer, Stuart; Jelinsky, Patrick

    1990-01-01

    A possible spectral emission feature is reported in the direction of the nearby hot white dwarf G191-B2B at 581.5 ± 6 Å with a significance of 3.8 sigma. This emission has been identified as He I 584.3 Å. The emission cannot be due to local geocoronal emission or interplanetary backscatter of solar He I 584 Å emission because the feature is not detected in a nearby sky exposure. Possible sources for this emission are examined, including the photosphere of G191-B2B, the comparison star G191-B2A, and a possible nebulosity near or around G191-B2B. The parameters required to explain the emission are derived for each case. All of these explanations require unexpected physical conditions; hence we believe this result must receive confirming verification despite the statistical likelihood of the detection.

  15. Possible detection of an emission feature near 584 Å in the direction of G191-B2B

    International Nuclear Information System (INIS)

    Green, J.; Bowyer, S.; Jelinsky, P.

    1990-01-01

    A possible spectral emission feature is reported in the direction of the nearby hot white dwarf G191-B2B at 581.5 ± 6 Å with a significance of 3.8 sigma. This emission has been identified as He I 584.3 Å. The emission cannot be due to local geocoronal emission or interplanetary backscatter of solar He I 584 Å emission because the feature is not detected in a nearby sky exposure. Possible sources for this emission are examined, including the photosphere of G191-B2B, the comparison star G191-B2A, and a possible nebulosity near or around G191-B2B. The parameters required to explain the emission are derived for each case. All of these explanations require unexpected physical conditions; hence we believe this result must receive confirming verification despite the statistical likelihood of the detection. 15 refs

  16. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applied to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method can obtain more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more comprehensive statistical analyses of Hα solar flares.
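
    A minimal version of the two-threshold segmentation step might look like the sketch below, where the adaptive gray threshold is taken as mean + k·std of on-disk pixels; the constant k, the minimum area and the assumption that off-disk pixels are zeroed are illustrative, not the paper's calibrated values.

```python
import numpy as np
from scipy import ndimage

def segment_flares(halpha, k=3.0, min_area=50):
    """Adaptive gray threshold (mean + k * std of on-disk pixels) followed by
    an area threshold on connected components; returns a label image and
    per-event area and peak brightness."""
    disk = halpha > 0                                 # assume off-disk pixels are zeroed
    thr = halpha[disk].mean() + k * halpha[disk].std()
    bright = (halpha > thr) & disk
    labels, n = ndimage.label(bright)
    events = []
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() < min_area:
            labels[region] = 0                        # reject small transient features
            continue
        events.append({"label": lab,
                       "area_px": int(region.sum()),
                       "peak": float(halpha[region].max())})
    return labels, events
```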

  17. The fast detection of rare auditory feature conjunctions in the human brain as revealed by cortical gamma-band electroencephalogram.

    Science.gov (United States)

    Ruusuvirta, T; Huotilainen, M

    2005-01-01

    Natural environments typically contain temporal scatters of sounds emitted from multiple sources. The sounds may often physically stand out from one another in their conjoined rather than simple features. This poses a particular challenge for the brain to detect which of these sounds are rare and, therefore, potentially important for survival. We recorded gamma-band (32-40 Hz) electroencephalographic (EEG) oscillations from the scalp of adult humans who passively listened to a repeated tone carrying frequent and rare conjunctions of its frequency and intensity. EEG oscillations that this tone induced, rather than evoked, differed in amplitude between the two conjunction types within the 56-ms analysis window from tone onset. Our finding suggests that, perhaps with the support of its non-phase-locked synchrony in the gamma band, the human brain is able to detect rare sounds as feature conjunctions very rapidly.

  18. Geomorphological change detection using object-based feature extraction from multi-temporal LIDAR data

    NARCIS (Netherlands)

    Seijmonsbergen, A.C.; Anders, N.S.; Bouten, W.; Feitosa, R.Q.; da Costa, G.A.O.P.; de Almeida, C.M.; Fonseca, L.M.G.; Kux, H.J.H.

    2012-01-01

    Multi-temporal LiDAR DTMs are used for the development and testing of a method for geomorphological change analysis in western Austria. Our test area is located on a mountain slope in the Gargellen Valley in western Austria. Six geomorphological features were mapped by using stratified Object-Based

  19. Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features

    Science.gov (United States)

    Dixit, Rahul; Naskar, Ruchira

    2018-03-01

    In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescale) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary brings about such additional transforms on the duplicated regions. In this work, we utilize Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which help us to achieve invariance to all the above forms of attacks in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
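
    The rotation/scale tolerance comes from the Fourier-Mellin construction: translation is absorbed by the Fourier magnitude, while rotation and rescale become shifts on a log-polar grid. The block descriptor sketched below follows that construction in plain NumPy/SciPy (block size, grid resolution and normalization are assumptions, and the segmentation and matching stages are omitted); duplicated regions would then be found by matching descriptors of overlapping blocks and flagging near-identical pairs that are spatially far apart.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fmt_descriptor(block, n_r=32, n_theta=32):
    """Rotation/scale-tolerant descriptor of a grayscale image block:
    Fourier magnitude (translation invariant) resampled on a log-polar grid
    (rotation -> shift in theta, rescale -> shift in log-r), followed by the
    magnitude of a second FFT so those shifts drop out as well."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(block)))
    cy, cx = np.array(F.shape) / 2.0
    r_max = min(cy, cx)
    log_r = np.linspace(0.0, np.log(r_max), n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    lp = map_coordinates(F, coords, order=1, mode="nearest")   # log-polar resampling
    desc = np.abs(np.fft.fft2(lp))
    return (desc / (np.linalg.norm(desc) + 1e-12)).ravel()
```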

  20. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  1. Detection of relationships among multi-modal brain imaging meta-features via information flow.

    Science.gov (United States)

    Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D

    2018-01-15

    Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data plus other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset, balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated. The framework we propose is

  2. Ultrasound findings and histological features of ductal carcinoma in situ detected by ultrasound examination alone

    OpenAIRE

    Izumori, Ayumi; Takebe, Koji; Sato, Akira

    2009-01-01

    Background With the increasing use of high-resolution ultrasound (US) examination, many breast carcinomas that cannot be identified by mammography (MMG) alone have been detected. Many of these carcinomas are ductal carcinoma in situ (DCIS) and small-sized invasive carcinomas. To date, DCISs have often been described as palpable masses with calcifications on MMG, but what are the characteristics of DCISs that are detectable by US alone? Methods One hundred fifty cases with DCIS that we expe...

  3. On-Line Fault Detection in Wind Turbine Transmission System using Adaptive Filter and Robust Statistical Features

    Directory of Open Access Journals (Sweden)

    Mark Frogley

    2013-01-01

    Full Text Available To reduce maintenance costs, avoid catastrophic failures, and improve the reliability of wind transmission systems, an online condition monitoring system is critically important. In real applications, many rotating mechanical faults, such as bearing surface defects, gear tooth cracks and chipped gear teeth, generate impulsive signals. When these types of faults develop inside rotating machinery, an impact force can be generated each time the rotating components pass over the damage point. The impact force causes a ringing of the support structure at its natural frequency. By effectively detecting those periodic impulse signals, one group of rotating machine faults can be detected and diagnosed. However, in real wind turbine operations, impulsive fault signals are usually weak relative to the background noise and the vibration signals generated by other, healthy components, such as the shaft, blades and gears. Moreover, wind turbine transmission systems work under dynamic operating conditions, which further increases the difficulty of fault detection and diagnostics. Advanced signal processing methods that enhance the impulsive signals are therefore greatly needed. In this paper, an adaptive filtering technique is applied to enhance the signal-to-noise ratio of fault impulses in wind turbine gear transmission systems. Multiple statistical features designed to quantify the impulsive content of the processed signal are extracted for bearing fault detection. The multi-dimensional features are then transformed into a one-dimensional feature, and a minimum-error-rate classifier is designed on the compressed feature to identify a gear transmission system with a defect. Real wind turbine vibration signals are used to demonstrate the effectiveness of the presented methodology.

  4. EEG machine learning with Higuchi fractal dimension and Sample Entropy as features for successful detection of depression

    OpenAIRE

    Cukic, Milena; Pokrajac, David; Stokic, Miodrag; Simic, slobodan; Radivojevic, Vlada; Ljubisavljevic, Milos

    2018-01-01

    Reliable diagnosis of depressive disorder is essential for both optimal treatment and prevention of fatal outcomes. In this study, we aimed to elucidate the effectiveness of two non-linear measures, Higuchi Fractal Dimension (HFD) and Sample Entropy (SampEn), in detecting depressive disorders when applied to EEG. HFD and SampEn of EEG signals were used as features for seven machine learning algorithms including Multilayer Perceptron, Logistic Regression, Support Vector Machines with the linea...
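
    Higuchi's fractal dimension has a standard definition, estimated as the slope of the log mean curve length versus log(1/k); a small NumPy implementation is sketched below, with k_max chosen arbitrarily (the study's epoch length, channels and k_max are not given in this record).

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal (e.g. one EEG epoch):
    the slope of log(mean curve length L(k)) versus log(1/k), k = 1..k_max."""
    x = np.asarray(x, float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)          # subsampled series starting at offset m
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalisation factor
            lengths.append(dist * norm / k)
        L.append(np.mean(lengths))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope
```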

  5. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Lars J., E-mail: Lars.grimm@duke.edu; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie [Department of Radiology, Duke University Medical Center, Box 3808, Durham, North Carolina 27710 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina School of Medicine, 2006 Old Clinic, CB No. 7510, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Duke University Medical Center, Box 2731 Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  6. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    International Nuclear Information System (INIS)

    Grimm, Lars J.; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  7. Automated detection of heart ailments from 12-lead ECG using complex wavelet sub-band bi-spectrum features.

    Science.gov (United States)

    Tripathy, Rajesh Kumar; Dandapat, Samarendra

    2017-04-01

    The complex wavelet sub-band bi-spectrum (CWSB) features are proposed for detection and classification of myocardial infarction (MI), heart muscle disease (HMD) and bundle branch block (BBB) from 12-lead ECG. The dual tree CW transform of 12-lead ECG produces CW coefficients at different sub-bands. The higher-order CW analysis is used for evaluation of the CWSB. The mean of the absolute value of the CWSB, and the numbers of negative and positive phase angles from the phase of the CWSB of 12-lead ECG, are evaluated as features. Extreme learning machine and support vector machine (SVM) classifiers are used to evaluate the performance of the CWSB features. Experimental results show that the proposed CWSB features of 12-lead ECG and the SVM classifier are successful for classification of various heart pathologies. The individual accuracy values for the MI, HMD and BBB classes are obtained as 98.37, 97.39 and 96.40%, respectively, using an SVM classifier with a radial basis function kernel. A comparison has also been made with existing 12-lead ECG-based cardiac disease detection techniques.

  8. Effective Dysphonia Detection Using Feature Dimension Reduction and Kernel Density Estimation for Patients with Parkinson’s Disease

    Science.gov (United States)

    Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

    2014-01-01

    Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson’s disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher’s linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406
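
    A scikit-learn sketch of the KPCA-plus-kernel-density MAP pipeline is given below; the RBF kernel width, the KDE bandwidth and the equal-prior assumption are placeholders rather than the study's fitted values, and the sequential-forward-selection step is omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

def fit_map_classifier(X, y, bandwidth=0.25):
    """KPCA projection of the selected vocal measures to a 2-D feature space,
    class-conditional kernel density estimates, and a MAP decision rule
    (equal class priors assumed here for simplicity)."""
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit(X)
    Z = kpca.transform(X)
    kdes = {c: KernelDensity(bandwidth=bandwidth).fit(Z[y == c]) for c in np.unique(y)}
    classes = sorted(kdes)

    def predict(X_new):
        Z_new = kpca.transform(X_new)
        # score_samples returns log-densities; with equal priors the MAP rule
        # reduces to picking the class with the largest class-conditional density
        log_dens = np.column_stack([kdes[c].score_samples(Z_new) for c in classes])
        return np.array(classes)[np.argmax(log_dens, axis=1)]
    return predict
```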

  9. An image-processing method to detect sub-optical features based on understanding noise in intensity measurements.

    Science.gov (United States)

    Bhatia, Tripta

    2018-02-01

    Accurate quantitative analysis of image data requires that we distinguish between fluorescence intensity (true signal) and the noise inherent to its measurements to the extent possible. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine dissolved in water and water-glycerol mixtures by using a fluorescence confocal polarizing microscope. We quantify image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique, "optimum smoothening", to improve the signal-to-noise ratio of features of interest without smearing their structural details. A high SNR provides the positional accuracy needed to resolve features of interest whose width is below the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected in this paper have widths of [Formula: see text] and [Formula: see text] nm, respectively. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis, down to sub-optical length scales, of features obtained by any kind of fluorescence intensity imaging in raster mode.

  10. Contrast-enhanced ultrasound features of hepatocellular carcinoma not detected during the screening procedure.

    Science.gov (United States)

    Dong, Yi; Wang, Wen-Ping; Mao, Feng; Dietrich, Christoph

    2017-08-01

    Aim  The aim of this retrospective study is to report the contrast-enhanced ultrasound (CEUS) characteristics of hepatocellular carcinoma (HCC) that was not detected during the screening of patients at risk. Methods  Sixty-four patients with a finally solitary, histologically proven HCC that had not been detected during the screening procedure were retrospectively analyzed. Most HCC lesions (90.6 %, 58/64) measured < 20 mm in diameter. None of the HCC lesions was detected during the initial screening procedure, but all were suspected on contrast-enhanced magnetic resonance imaging. The final gold standard was biopsy or surgery with histological examination. Results  On CEUS, 62/64 (96.8 %) of HCCs were characterized as obviously hyperenhanced lesions in the arterial phase, and 41/64 (64.1 %) as hypoenhancing lesions in the portal venous and late phases. During the arterial phase of CEUS, 96.8 % of HCCs displayed homogeneous hyperenhancement. Knowing the CEUS and magnetic resonance imaging findings, 45/64 (70.3 %) could have been detected using B-mode ultrasound (BMUS). Conclusion  BMUS as a screening procedure is generally accepted. Contrast-enhanced imaging modalities have improved the detection and characterization of HCC. Homogeneous hyperenhancement during the arterial phase and mild washout are indicative of HCC in liver cirrhosis. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Novel Feature Modelling the Prediction and Detection of sEMG Muscle Fatigue towards an Automated Wearable System

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2010-05-01

    Full Text Available Surface Electromyography (sEMG) activity of the biceps muscle was recorded from ten subjects performing isometric contraction until fatigue. A novel feature (1D spectro_std) was used to model three classes of fatigue, enabling the prediction and detection of fatigue. Initial results of class separation were encouraging, discriminating between the three classes of fatigue: a longitudinal classification on Non-Fatigue and Transition-to-Fatigue shows 81.58% correct classification with a prediction accuracy of 0.74, while the longitudinal classification on Transition-to-Fatigue and Fatigue showed a lower average correct classification of 66.51% with a prediction accuracy of 0.73. Comparison of the 1D spectro_std with other sEMG fatigue features on the same dataset shows a significant improvement in classification: results show a significant 20.58% (p < 0.01) improvement when using the 1D spectro_std to classify Non-Fatigue and Transition-to-Fatigue. In classifying Transition-to-Fatigue and Fatigue, results also show a significant improvement over the other features, giving 8.14% (p < 0.05) improvement on average over all compared features.

  12. Identification of input variables for feature based artificial neural networks-saccade detection in EOG recordings.

    Science.gov (United States)

    Tigges, P; Kathmann, N; Engel, R R

    1997-07-01

    Though artificial neural networks (ANN) are excellent tools for pattern recognition problems when signal to noise ratio is low, the identification of decision relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only links for a specific construction. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw data based ANN yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature based ANN produced a reduction of the error rate of nearly 40% compared with the raw data ANN. An overall correct classification of 92% of so far unknown data was realized. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.

  13. Synchronous Adversarial Feature Learning for LiDAR based Loop Closure Detection

    OpenAIRE

    Yin, Peng; He, Yuqing; Xu, Lingyun; Peng, Yan; Han, Jianda; Xu, Weiliang

    2018-01-01

    Loop Closure Detection (LCD) is the essential module in the simultaneous localization and mapping (SLAM) task. In the current appearance-based SLAM methods, the visual inputs are usually affected by illumination, appearance and viewpoints changes. Comparing to the visual inputs, with the active property, light detection and ranging (LiDAR) based point-cloud inputs are invariant to the illumination and appearance changes. In this paper, we extract 3D voxel maps and 2D top view maps from LiDAR ...

  14. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    International Nuclear Information System (INIS)

    García, A; Romano, H; Laciar, E; Correa, R

    2011-01-01

    In this work a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of the detected complexes were extracted. A vector was built from these features; this vector is the input of the classification module, which is based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beats (PVC), Atrial Premature Contraction beats (APC) and Normal Beats (NB). These beat categories represent the most important groups in commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166,343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.

  15. Using space-time features to improve detection of forest disturbances from Landsat time series

    NARCIS (Netherlands)

    Hamunyela, E.; Reiche, J.; Verbesselt, J.; Herold, M.

    2017-01-01

    Current research on forest change monitoring using medium spatial resolution Landsat satellite data aims for accurate and timely detection of forest disturbances. However, producing forest disturbance maps that have both high spatial and temporal accuracy is still challenging because of the

  16. Pattern-based feature extraction for fault detection in quality relevant process control

    NARCIS (Netherlands)

    Peruzzo, S.; Holenderski, M.J.; Lukkien, J.J.

    2017-01-01

    Statistical quality control (SQC) applies multivariate statistics to monitor production processes over time and detect changes in their performance in terms of meeting specification limits on key product quality metrics. These limits are imposed by customers and typically assumed to be a single

  17. Visualizing the Limits of Low Vision in Detecting Natural Image Features

    NARCIS (Netherlands)

    Hogervorst, M.A.; Damme, W.J.M. van

    2008-01-01

    Purpose. The purpose of our study was to develop a tool to visualize the limitations posed by visual impairments in detecting small and low-contrast elements in natural images. This visualization tool incorporates existing models of several aspects of visual perception, such as the band-limited

  18. Automatic detection of children's engagement using non-verbal features and ordinal learning

    NARCIS (Netherlands)

    Kim, Jaebok; Truong, Khiet Phuong; Evers, Vanessa

    In collaborative play, young children can exhibit different types of engagement. Some children are engaged with other children in the play activity while others are just looking. In this study, we investigated methods to automatically detect the children's levels of engagement in play settings using

  19. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, T

    2015-09-01

    Full Text Available. Feature Selection for Anomaly-Based Network Intrusion Detection Using Cluster Validity Indices. Tyrone Naidoo, Jules-Raymond Tapamo, Andre McDonald; Modelling and Digital Science, Council for Scientific and Industrial Research, South Africa ...

  20. Infants' Detection of Correlated Features among Social Stimuli: A Precursor to Stereotyping?

    Science.gov (United States)

    Levy, Gary D.; And Others

    This study examined the abilities of 10-month-old infants to detect correlations between objects and persons based on the characteristic of gender. A total of 32 infants were habituated to six stimuli in which a picture of a male or female face was paired with one of six objects such as a football or frying pan. Three objects were associated with…

  1. The radiological features, diagnosis and management of screen-detected lobular neoplasia of the breast: Findings from the Sloane Project.

    Science.gov (United States)

    Maxwell, Anthony J; Clements, Karen; Dodwell, David J; Evans, Andrew J; Francis, Adele; Hussain, Monuwar; Morris, Julie; Pinder, Sarah E; Sawyer, Elinor J; Thomas, Jeremy; Thompson, Alastair

    2016-06-01

    To investigate the radiological features, diagnosis and management of screen-detected lobular neoplasia (LN) of the breast. 392 women with pure LN alone were identified within the prospective UK cohort study of screen-detected non-invasive breast neoplasia (the Sloane Project). Demography, radiological features and diagnostic and therapeutic procedures were analysed. Non-pleomorphic LN (369/392) was most frequently diagnosed among women aged 50-54 and in 53.5% was found at the first screen. It occurred most commonly on the left (58.0%; p = 0.003), in the upper outer quadrant and confined to one site (single quadrant or retroareolar region). No bilateral cases were found. The predominant radiological feature was microcalcification (most commonly granular), which increased in frequency with increasing breast density. Casting microcalcification as the predominant feature was associated with a significantly larger lesion size than granular and punctate patterns (p = 0.034). 326/369 (88.3%) women underwent surgery, including 17 who underwent >1 operation, six who had mastectomy and six who had axillary surgery. Two patients had radiotherapy and 15 had endocrine treatment. Pleomorphic lobular carcinoma in situ (23/392) presented as granular microcalcification in 12; four women had mastectomy and six had radiotherapy. Screen-detected LN occurs in relatively young women and is predominantly non-pleomorphic and unilateral. It is typically associated with granular or punctate microcalcification in the left upper outer quadrant. Management, including surgical resection, is highly variable and requires evidence-based guideline development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Bathymetric study of the Neotectonic Naini Lake in outer Kumaun Himalaya

    Digital Repository Service at National Institute of Oceanography (India)

    Hashimi, N.H.; Pathak, M.C; Jauhari, P.; Nair, R.R.; Sharma, A.K.; Bhakuni, D.S.; Bisht, M.K.S.; Valdiya, K.S.

    The Naini Lake is a product of rotational movement on the NW-SE trending Nainital Fault, which occurred well after the establishment of the drainage of a mature stream named Balia Nala. A detailed bathymetric study permits division of this crescent-shaped lake...

  3. Swath bathymetric investigation of the seamounts located in the Laxmi Basin, eastern Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Bhattacharya, G.C.; Murty, G.P.S.; Srinivas, K.; Chaubey, A.K.; Sudhakar, T.; Nair, R.R.

    Multibeam (hydrosweep) swath bathymetric investigations revealed the presence of a NNW trending linear seamount chain along the axial part of the Laxmi Basin in the eastern Arabian Sea, between 15°N, 70°15'E and 17°20'N, 69°E. This chain...

  4. The estimation of sea floor dynamics from bathymetric surveys of a sand wave area

    NARCIS (Netherlands)

    Dorst, Leendert; Roos, Pieter C.; Hulscher, Suzanne J.M.H.; Lindenbergh, R.C.

    2009-01-01

    The analysis of series of offshore bathymetric surveys provides insight into the morphodynamics of the sea floor. This knowledge helps to improve resurvey policies for the maintenance of port approaches and nautical charting, and to validate morphodynamic models. We propose a method for such an

  5. Multibeam bathymetric, gravity and magnetic studies over 79 degrees E fracture zone, central Indian basin

    Digital Repository Service at National Institute of Oceanography (India)

    KameshRaju, K.A.; Ramprasad, T.; Kodagali, V.N.; Nair, R.R.

    A regional scale bathymetric map has been constructed for the 79 degrees E fracture zone (FZ) in the Central Indian Basin between 10 degrees 15'S and 14 degrees 45'S lat. and 78 degrees 55'E and 79 degrees 20'E long. using the high...

  6. Automated detection of qualitative spatio-temporal features in electrocardiac activation maps.

    Science.gov (United States)

    Ironi, Liliana; Tentoni, Stefania

    2007-02-01

    This paper describes work aiming at the realization of a tool for the automated interpretation of electrocardiac maps. Such maps can capture a number of electrical conduction pathologies, such as arrhythmia, that can be missed by the analysis of traditional electrocardiograms. However, their introduction into clinical practice is still far off, as their interpretation requires skills that belong to very few experts. An automated interpretation tool would therefore bridge the gap between the established research outcome and clinical practice, with a consequent great impact on health care. Qualitative spatial reasoning can play a crucial role in the identification of spatio-temporal patterns and salient features that characterize the heart's electrical activity. We adopted the spatial aggregation (SA) conceptual framework and an interplay of numerical and qualitative information to extract features from epicardial maps and to make them available for reasoning tasks. Our focus is on epicardial activation isochrone maps, as they are a synthetic representation of spatio-temporal aspects of the propagation of the electrical excitation. We provide a computational SA-based methodology to extract, from 3D epicardial data gathered over time, (1) the excitation wavefront structure, and (2) the salient features that characterize wavefront propagation and visually correspond to specific geometric objects. The proposed methodology provides a robust and efficient way to identify salient pieces of information in activation time maps. The hierarchical structure of the abstracted geometric objects, crucial in capturing the prominent information, facilitates the definition of general rules necessary to infer the correlation between pathophysiological patterns and wavefront structure and propagation.

  7. Underwater Cylindrical Object Detection Using the Spectral Features of Active Sonar Signals with Logistic Regression Models

    Directory of Open Access Journals (Sweden)

    Yoojeong Seo

    2018-01-01

    Full Text Available The issue of detecting objects bottoming on the sea floor is significant in various fields, including civilian and military areas. The objective of this study is to investigate a logistic regression model that discriminates the target from the clutter, and to verify whether a model trained on simulated data generated by a mathematical model can be applied to real experimental data, because it is not easy to obtain sufficient data in the underwater field. In the first stage of this study, when the clutter signal energy is so strong that the detection of a target is difficult, the logistic regression model is employed to distinguish the strong clutter signal from the target signal. Previous studies have found that when the clutter energy is large, false detections occur even with the various existing detection schemes. For this reason, the discrete Fourier transform (DFT) magnitude spectrum of acoustic signals received by active sonar is used to train the model to distinguish whether the received signal contains a target signal or not. The goodness of fit of the model is verified in terms of the receiver operating characteristic (ROC), area under the ROC curve (AUC), and classification table. The detection performance of the proposed model is evaluated in terms of detection rate according to the target-to-clutter ratio (TCR). Furthermore, real experimental data are employed to test the proposed approach. When using the experimental data to test the model, the logistic regression model is trained on simulated data generated from a mathematical model of backscattering from a cylindrical object. The mathematical model is developed according to the size of the cylinder used in the experiment. Since the information on the experimental environment, including the sound speed, the sediment type and such, is not available, once simulated data are generated under various conditions, valid simulated data are selected using 70% of the
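
    An illustrative sketch (not the authors' code) of training a logistic regression model on DFT magnitude spectra to separate target-bearing signals from clutter; the simulated signals below are simple placeholders for the sonar returns.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n, length = 400, 256
    clutter = rng.normal(size=(n, length))
    target = rng.normal(size=(n, length)) + np.sin(np.linspace(0, 40 * np.pi, length))

    signals = np.vstack([clutter, target])
    labels = np.r_[np.zeros(n), np.ones(n)]  # 0 = clutter only, 1 = contains target

    # Feature vector: magnitude of the one-sided DFT of each received signal.
    X = np.abs(np.fft.rfft(signals, axis=1))

    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```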

  8. Comparing whole slide digital images versus traditional glass slides in the detection of common microscopic features seen in dermatitis

    Directory of Open Access Journals (Sweden)

    Nikki S Vyas

    2016-01-01

    Full Text Available Background: The quality and limitations of digital slides are not fully known. We aimed to estimate intrapathologist discrepancy in detecting specific microscopic features on glass slides and digital slides created by scanning at ×20. Methods: Hematoxylin and eosin and periodic acid-Schiff glass slides were digitized using the Mirax Scan (Carl Zeiss Inc., Germany). Six pathologists assessed 50-71 digital slides. We recorded objective magnification, total time, and detection of the following: Mast cells; eosinophils; plasma cells; pigmented macrophages; melanin in the epidermis; fungal bodies; neutrophils; civatte bodies; parakeratosis; and sebocytes. This process was repeated using the corresponding glass slides after 3 weeks. The diagnosis was not required. Results: The mean time to assess digital slides was 176.77 s and 137.61 s for glass slides (P < 0.001, 99% confidence interval [CI]). The mean objective magnification used to detect features using digital slides was 18.28 and 14.07 for glass slides (P < 0.001, 99.99% CI). Parakeratosis, civatte bodies, pigmented macrophages, melanin in the epidermis, mast cells, eosinophils, plasma cells, and neutrophils were identified at lower objectives on glass slides (P = 0.023-0.001, 95% CI). Average intraobserver concordance ranged from κ = 0.30 to κ = 0.78. Features with poor to fair average concordance were: melanin in the epidermis (κ = 0.15-0.58); plasma cells (κ = 0.15-0.49); and neutrophils (κ = 0.12-0.48). Features with moderate average intrapathologist concordance were: parakeratosis (κ = 0.21-0.61); civatte bodies (κ = 0.21-0.71); pigment-laden macrophages (κ = 0.34-0.66); mast cells (κ = 0.29-0.78); and eosinophils (κ = 0.31-0.79). The average intrapathologist concordance was good for sebocytes (κ = 0.51-1.00) and fungal bodies (κ = 0.47-0.76). Conclusions: Telepathology using digital slides scanned at ×20 is sufficient for detection of histopathologic features routinely encountered in

  9. Limb/pelvis hypoplasia/aplasia with skull defect (Schinzel phocomelia): distinctive features and prenatal detection.

    Science.gov (United States)

    Olney, R S; Hoyme, H E; Roche, F; Ferguson, K; Hintz, S; Madan, A

    2001-11-01

    Schinzel phocomelia syndrome is characterized by limb/pelvis hypoplasia/aplasia: specifically, intercalary limb deficiencies and absent or hypoplastic pelvic bones. The phenotype is similar to that described in a related multiple malformation syndrome known as Al-Awadi/Raas-Rothschild syndrome. The additional important feature of large parietooccipital skull defects without meningocele, encephalocele, or other brain malformation has thus far been reported only in children with Schinzel phocomelia syndrome. We recently evaluated a boy affected with Schinzel phocomelia born to nonconsanguineous healthy parents of Mexican origin. A third-trimester fetal ultrasound scan showed severe limb deficiencies and an absent pelvis. The infant died shortly after birth. Dysmorphology examination, radiographs, and autopsy revealed quadrilateral intercalary limb deficiencies with preaxial toe polydactyly; an absent pelvis and a 7 x 3-cm skull defect; and extraskeletal anomalies including microtia, telecanthus, micropenis with cryptorchidism, renal cysts, stenosis of the colon, and a cleft alveolar ridge. A normal 46,XY karyotype was demonstrated, and autosomal recessive inheritance was presumed on the basis of previously reported families. This case report emphasizes the importance of recognizing severe pelvic and skull deficiencies (either post- or prenatally) in differentiating infants with Schinzel phocomelia from other multiple malformation syndromes that feature intercalary limb defects, including thalidomide embryopathy and Roberts-SC phocomelia. Copyright 2001 Wiley-Liss, Inc.

  10. INTERSTELLAR CARBODIIMIDE (HNCNH): A NEW ASTRONOMICAL DETECTION FROM THE GBT PRIMOS SURVEY VIA MASER EMISSION FEATURES

    International Nuclear Information System (INIS)

    McGuire, Brett A.; Loomis, Ryan A.; Charness, Cameron M.; Corby, Joanna F.; Blake, Geoffrey A.; Hollis, Jan M.; Lovas, Frank J.; Jewell, Philip R.; Remijan, Anthony J.

    2012-01-01

    In this work, we identify carbodiimide (HNCNH), which is an isomer of the well-known interstellar species cyanamide (NH2CN), in weak maser emission, using data from the Green Bank Telescope PRIMOS survey toward Sgr B2(N). All spectral lines observed are in emission and have energy levels in excess of 170 K, indicating that the molecule likely resides in relatively hot gas that characterizes the denser regions of this star-forming region. The anticipated abundance of this molecule from ice mantle experiments is ∼10% of the abundance of NH2CN, which in Sgr B2(N) corresponds to ∼2 × 10^13 cm^-2. Such an abundance results in transition intensities well below the detection limit of any current astronomical facility and, as such, HNCNH could only be detected by those transitions which are amplified by masing.

  11. High resolution bathymetric and sonar images of a ridge southeast of Terceira Island (Azores plateau)

    Science.gov (United States)

    Lourenço, N.; Miranda, J. M.; Luis, J.; Silva, I.; Goslin, J.; Ligi, M.

    2003-04-01

    The Terceira rift is an oblique ultra-slow spreading system where a transtensive regime results from differential movement between the Eurasian and African plates. So far, no classical ridge segmentation pattern has been observed here. The predominant morphological features are fault-controlled rhombic-shaped basins and volcanism-related morphologies such as circular seamounts and volcanic ridges. We present SIMRAD EM300 (bathymetry + backscatter) images acquired over one of these ridges, located SE of Terceira Island, during the SIRENA cruise (PI J. Goslin), which complement previous TOBI mosaics acquired over the same area during the AZZORRE99 cruise (PI M. Ligi). The ridge has a NW-SE orientation, is seismically active (a seismic crisis was documented in 1997) and corresponds to the southern branch of a V-shaped bathymetric feature enclosing Terceira Island, whose tip is located west of the island near the 1998 Serreta ridge eruption site. NE of the ridge, the core of the V corresponds to the North Hirondelle basin. All this area corresponds mainly to the Brunhes magnetic epoch. The new bathymetry maps reveal a partition between tectonic processes, centred on the ridge, and volcanism present at the bottom of the North Hirondelle basin. The high-backscatter surface of the ridge is cut by a set of sub-parallel anastomosed normal faults striking between N130º and N150º. Some faults present horse-tail terminations. Fault splays sometimes link to neighbouring faults, defining extensional duplexes and fault-wedge basins and highs of rhombic shape. The faulting geometry suggests that a left-lateral strike-slip component should be present. The top of the ridge consists of an arched demi-horst and is probably the remnant of a volcanic structure (caldera system?) that existed prior to the onset of the tectonic stage of the ridge. Both ridge flanks display gullies and mass-wasting fans at the base of the slope. The ridge vicinities are almost exclusively composed of a grayish homogeneous

  12. Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model

    Science.gov (United States)

    Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.

    2009-01-01

    Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.

  13. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    Science.gov (United States)

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.

  14. A new bathymetric survey of the Suwałki Landscape Park lakes

    Directory of Open Access Journals (Sweden)

    Borowiak Dariusz

    2016-12-01

    Full Text Available The results of the latest bathymetric survey of 21 lakes in the Suwałki Landscape Park (SLP) are presented here. Measurements of the underwater lake topography were carried out in the years 2012–2013 using the hydroacoustic method (sonar Lawrence 480M). In the case of four lakes (Błędne, Pogorzałek, Purwin, Wodziłki) this was the first time a bathymetric survey had been performed. Field material was used to prepare bathymetric maps, which were then used for calculating the basic size and shape parameters of the lake basins. The results of the studies are shown against the nearly 90 year history of bathymetric surveying of the SLP lakes. In the light of the current measurements, the total area of the SLP lakes is over 634 hm2 and its limnic ratio is 10%. Lake water resources in the park were estimated at 143 037.1 dam3. This value corresponds to a retention index of 2257 mm. In addition, studies have shown that the previous morphometric data are not very accurate. The relative differences in the lake surface areas ranged from –14.1 to 9.1%, and in the case of volume – from –32.2 to 35.3%. The greatest differences in the volume, expressed in absolute values, were found in the largest SLP lakes: Hańcza (1716.1 dam3), Szurpiły (1282.0 dam3), Jaczno (816.4 dam3), Perty (427.1 dam3), Jegłówek (391.2 dam3) and Kojle (286.2 dam3). The smallest disparities were observed with respect to the data obtained by the IRS (Inland Fisheries Institute) in Olsztyn. The IMGW (Institute of Meteorology and Water Management) bathymetric measurements were affected by some significant errors, and morphometric parameters determined on their basis are only approximate.

  15. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features.

    Science.gov (United States)

    Grimm, Lars J; Ghate, Sujata V; Yoon, Sora C; Kuzmiak, Cherie M; Kim, Connie; Mazurowski, Maciej A

    2014-03-01

    The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502-0.739; 95% Confidence Interval: 0.543-0.680). Errors for mammographic masses made by radiology trainees can thus be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  16. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, its eradication in an early stage implies a high survival rate; it therefore demands early diagnosis. The customary diagnostic methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. The recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are selected afterwards by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
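
    A rough sketch of the later stages of such a pipeline (serial feature fusion, feature selection, SVM classification). The paper's Boltzmann-entropy selection is replaced here by a generic mutual-information ranking, and the colour/texture/HOG features are random placeholders rather than values computed from PH2 images.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    color, texture, hog = (rng.normal(size=(200, d)) for d in (16, 32, 64))
    X = np.hstack([color, texture, hog])      # serial (concatenation-based) fusion
    y = rng.integers(0, 2, size=200)          # 0 = benign, 1 = melanoma (placeholder)

    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=40),  # stand-in selector
                          SVC(kernel="rbf"))
    print(cross_val_score(model, X, y, cv=5).mean())
    ```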

  17. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
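
    A hedged sketch of the feature side of this approach: partial autocorrelation coefficients (PACCs) of acceleration-like signals used as damage-sensitive features and classified with linear discriminant analysis. The AR(1) signal model and all parameter values are illustrative assumptions, not the experimental blade data.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import pacf
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(3)

    def pacc_features(signal, nlags=10):
        # Drop lag 0 (always 1) and keep the first `nlags` coefficients.
        return pacf(signal, nlags=nlags)[1:]

    def simulate(state, n=100, length=512):
        # Damage modelled crudely as a small change in the AR(1) coefficient.
        phi = 0.6 if state == "healthy" else 0.7
        sigs = np.zeros((n, length))
        for i in range(n):
            e = rng.normal(size=length)
            for t in range(1, length):
                sigs[i, t] = phi * sigs[i, t - 1] + e[t]
        return sigs

    X = np.vstack([np.array([pacc_features(s) for s in simulate(st)])
                   for st in ("healthy", "damaged")])
    y = np.r_[np.zeros(100), np.ones(100)]

    clf = LinearDiscriminantAnalysis().fit(X, y)
    print("resubstitution accuracy:", clf.score(X, y))
    ```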

  18. A Novel Ship Detection Method Based on Gradient and Integral Feature for Single-Polarization Synthetic Aperture Radar Imagery

    Directory of Open Access Journals (Sweden)

    Hao Shi

    2018-02-01

    Full Text Available With the rapid development of remote sensing technologies, SAR satellites like China's Gaofen-3 satellite have more imaging modes and higher resolution. With the availability of high-resolution SAR images, automatic ship target detection has become an important topic in maritime research. In this paper, a novel ship detection method based on gradient and integral features is proposed. This method is mainly composed of three steps. First, in the preprocessing step, a filter is employed to smooth the clutter; the smoothing effect can be adaptively adjusted according to the statistical information of the sub-window, so the filter retains details while achieving noise suppression. Second, in the candidate area extraction, a sea-land segmentation method based on gradient enhancement is presented. The integral image method is employed to accelerate computation. Finally, in the ship target identification step, a feature extraction strategy based on Haar-like gradient information and a Radon transform is proposed. This strategy decreases the number of templates compared with traditional Haar-like methods. Experiments were performed using Gaofen-3 single-polarization SAR images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. In addition, this method has the potential for on-board processing.
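
    A minimal sketch of the integral-image idea used above to accelerate rectangular (Haar-like) sums; the tiny image is only there to check the arithmetic.

    ```python
    import numpy as np

    def integral_image(img):
        # ii[i, j] = sum of img[:i, :j]; the padded row/column of zeros simplifies lookups.
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def rect_sum(ii, r0, c0, r1, c1):
        # Sum of img[r0:r1, c0:c1] from four lookups, in constant time per rectangle.
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

    img = np.arange(16, dtype=float).reshape(4, 4)
    ii = integral_image(img)
    print(rect_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())  # both 30.0
    ```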

  19. Nonlinear Heart Rate Variability features for real-life stress detection. Case study: students under stress due to university examination.

    Science.gov (United States)

    Melillo, Paolo; Bracale, Marcello; Pecchia, Leandro

    2011-11-07

    This study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection. 42 students volunteered to participate in the study on HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed to be a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed using the Poincaré Plot, Approximate Entropy, Correlation Dimension, Detrended Fluctuation Analysis, and Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test, and for the development of a classifier we adopted Linear Discriminant Analysis (LDA). Almost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, sensitivity and specificity of 90%, 86%, and 95%, respectively. The results of the current study suggest that nonlinear HRV analysis using short-term ECG recordings could be effective in automatically detecting a real-life stress condition, such as a university examination.
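
    A hedged sketch of two of the nonlinear HRV descriptors mentioned above, the Poincaré plot parameters SD1 and SD2, computed here from a synthetic RR-interval series rather than real ECG data.

    ```python
    import numpy as np

    def poincare_sd1_sd2(rr):
        """rr: 1-D array of successive RR intervals (e.g. in ms)."""
        x, y = rr[:-1], rr[1:]                # consecutive RR pairs (RR_n, RR_{n+1})
        diff = (y - x) / np.sqrt(2.0)         # spread perpendicular to the identity line
        summ = (y + x) / np.sqrt(2.0)         # spread along the identity line
        sd1 = np.std(diff, ddof=1)            # short-term variability
        sd2 = np.std(summ, ddof=1)            # long-term variability
        return sd1, sd2

    rr = 800 + 40 * np.random.default_rng(4).standard_normal(300)  # synthetic RR series
    print(poincare_sd1_sd2(rr))
    ```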

  20. BLACK HOLE ATTACK IN AODV & FRIEND FEATURES UNIQUE EXTRACTION TO DESIGN DETECTION ENGINE FOR INTRUSION DETECTION SYSTEM IN MOBILE ADHOC NETWORK

    Directory of Open Access Journals (Sweden)

    HUSAIN SHAHNAWAZ

    2012-10-01

    Full Text Available An ad-hoc network is a collection of nodes capable of dynamically forming a temporary network without the support of any centralized fixed infrastructure. Since there is no central controller to determine reliable and secure communication paths in a mobile ad hoc network, each node has to rely on the others to forward packets; thus, highly cooperative nodes are required to ensure that an initiated data transmission does not fail. In a mobile ad hoc network (MANET), where security is a crucial issue and nodes are forced to rely on their neighbors, trust plays an important role and can improve the number of successful data transmissions. The larger the number of trusted nodes, the higher the rate of successful data communication that can be expected. In this paper, a black hole attack is applied in the network and statistics are collected to design a detection engine for a MANET intrusion detection system (IDS). Feature extraction and rule induction are applied, and the accuracy of the detection engine is evaluated using a support vector machine. The true positive rate generated by the detection engine is very high, and this constitutes a novel approach in the area of mobile ad hoc intrusion detection systems.

  1. Automated Detection of Glaucoma From Topographic Features of the Optic Nerve Head in Color Fundus Photographs.

    Science.gov (United States)

    Chakrabarty, Lipi; Joshi, Gopal Datt; Chakravarty, Arunava; Raman, Ganesh V; Krishnadas, S R; Sivaswamy, Jayanthi

    2016-07-01

    To describe and evaluate the performance of an automated CAD system for detection of glaucoma from color fundus photographs. Color fundus photographs of 2252 eyes from 1126 subjects were collected from 2 centers: Aravind Eye Hospital, Madurai and Coimbatore, India. The images of 1926 eyes (963 subjects) were used to train an automated image analysis-based system, which was developed to provide a decision on a given fundus image. A total of 163 subjects were clinically examined by 2 ophthalmologists independently and their diagnostic decisions were recorded. The consensus decision was defined to be the clinical reference (gold standard). Fundus images of eyes with disagreement in diagnosis were excluded from the study. The fundus images of the remaining 314 eyes (157 subjects) were presented to 4 graders and their diagnostic decisions on the same were collected. The performance of the system was evaluated on the 314 images, using the reference standard. The sensitivity and specificity of the system and 4 independent graders were determined against the clinical reference standard. The system achieved an area under receiver operating characteristic curve of 0.792 with a sensitivity of 0.716 and specificity of 0.717 at a selected threshold for the detection of glaucoma. The agreement with the clinical reference standard as determined by Cohen κ is 0.45 for the proposed system. This is comparable to that of the image-based decisions of 4 ophthalmologists. An automated system was presented for glaucoma detection from color fundus photographs. The overall evaluation results indicated that the presented system was comparable in performance to glaucoma classification by a manual grader solely based on fundus image examination.

  2. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection

    Directory of Open Access Journals (Sweden)

    Zhongwen Hu

    2016-02-01

    Full Text Available The accurate extraction and mapping of built-up areas play an important role in many social, economic, and environmental studies. In this paper, we propose a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework. First, an image is divided into small blocks, in which the spectral, textural, and structural features are extracted and represented using a multi-scale framework; a set of refined Harris corner points is then used to select blocks as training samples; finally, a built-up index image is obtained by minimizing the normalized spectral, textural, and structural distances to the training samples, and a built-up area map is obtained by thresholding the index image. Experiments confirm that the proposed approach is effective for high-resolution optical and synthetic aperture radar images, with different scenes and different spatial resolutions.
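
    An illustrative sketch of the corner-based sampling step: Harris corner points are extracted with scikit-image, and blocks containing many corners could then be kept as built-up training samples. The image here is synthetic; a real run would load a high-resolution satellite scene.

    ```python
    import numpy as np
    from skimage.feature import corner_harris, corner_peaks

    rng = np.random.default_rng(5)
    image = rng.random((256, 256))
    image[100:140, 100:140] += 2.0            # a bright, textured "built-up" patch

    response = corner_harris(image)
    corners = corner_peaks(response, min_distance=5)   # (row, col) coordinates
    print(corners.shape)

    # Blocks (e.g. 16x16 tiles) containing many corner points would then be kept
    # as candidate built-up training samples for the index computation.
    ```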

  3. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    OpenAIRE

    Entin Martiana Kusumaningtyas; Ali Ridho Barakbah; Aditya Afgan Hermawan

    2018-01-01

    As the WHO says, heart disease is the leading cause of death and examining it by current methods in hospitals is not cheap. Iridology is one of the most popular alternative ways to detect the condition of organs. Iridology is the science that enables a health practitioner or non-expert to study signs in the iris that are capable of showing abnormalities in the body, including basic genetics, toxin deposition, circulation of dams, and other weaknesses. Research on computer iridology has been d...

  4. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we applied various feature extraction strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers and considering multiple factors. The support vector machine kernels were evaluated based on the multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. For training/testing, tenfold cross-validation was employed, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. A deeper analysis was thus performed using diverse feature extraction strategies and robust machine learning classifiers with advanced parameter options. The Support Vector Machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weights gave higher performance for different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
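
    A sketch of the kind of hyper-parameter search described above (SVM kernel and box-constraint level, KNN distance metric and number of neighbours), run on synthetic feature vectors standing in for the EEG-derived features.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 20))            # placeholder feature vectors
    y = rng.integers(0, 2, size=300)          # seizure vs. non-seizure (placeholder)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    svm_grid = GridSearchCV(SVC(),
                            {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]},
                            cv=cv).fit(X, y)
    knn_grid = GridSearchCV(KNeighborsClassifier(),
                            {"metric": ["euclidean", "cityblock"],
                             "n_neighbors": [3, 5, 11],
                             "weights": ["uniform", "distance"]},
                            cv=cv).fit(X, y)

    print(svm_grid.best_params_, knn_grid.best_params_)
    ```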

  5. Semi-Automatic Detection of Indigenous Settlement Features on Hispaniola through Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Till F. Sonnemann

    2017-12-01

    Full Text Available Satellite imagery has had limited application in the analysis of pre-colonial settlement archaeology in the Caribbean; visible evidence of wooden structures perishes quickly in tropical climates. Only slight topographic modifications remain, typically associated with middens. Nonetheless, surface scatters, as well as the soil characteristics they produce, can serve as quantifiable indicators of an archaeological site, detectable by analyzing remote sensing imagery. A variety of pre-processed, very diverse data sets went through a process of image registration, with the intention of combining multispectral bands to feed two different semi-automatic direct detection algorithms: a posterior probability approach and a frequentist approach. Two 5 × 5 km2 areas in the northwestern Dominican Republic with diverse environments, sufficient imagery coverage, and a representative number of known indigenous site locations each served for one approach. Buffers around the locations of known sites, as well as areas with no likely archaeological evidence, were used as samples. The resulting maps offer quantifiable statistical outcomes of locations with similar pixel value combinations as the identified sites, indicating a higher probability of archaeological evidence. These still very experimental and as yet unvalidated trials, which have not been subsequently ground-truthed, show the variable potential of this method in diverse environments.

  6. Toward improved peptide feature detection in quantitative proteomics using stable isotope labeling.

    Science.gov (United States)

    Nilse, Lars; Sigloch, Florian Christoph; Biniossek, Martin L; Schilling, Oliver

    2015-08-01

    Reliable detection of peptides in LC-MS data is a key algorithmic step in the analysis of quantitative proteomics experiments. While highly abundant peptides can be detected reliably by most modern software tools, there is much less agreement on medium and low-intensity peptides in a sample. The choice of software tools can have a big impact on the quantification of proteins, especially for proteins that appear in lower concentrations. However, in many experiments, it is precisely this region of less abundant but substantially regulated proteins that holds the biggest potential for discoveries. This is particularly true for discovery proteomics in the pharmacological sector with a specific interest in key regulatory proteins. In this viewpoint article, we discuss how the development of novel software algorithms allows us to study this region of the proteome with increased confidence. Reliable results are one of many aspects to be considered when deciding on a bioinformatics software platform. Deployment into existing IT infrastructures, compatibility with other software packages, scalability, automation, flexibility, and support need to be considered and are briefly addressed in this viewpoint article. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. New technologies for the detection of natural and anthropic features in coastal areas

    International Nuclear Information System (INIS)

    Cappucci, Sergio; Del Monte, Maurizio; Paci, M.; Valentini, Emiliana

    2015-01-01

    Some results of the sub-project GE.RI.N (Natural Resources Management) conducted in the Marine Protected Area of the Egadi Islands (Western Sicily) are presented. Coastal and sea floor morphology has been investigated by integrating different data sources and using remote sensing data acquired by the Ministry of Environment during the MAMPIRA Project. This approach allowed us to recognize the real extent and distribution of several rocky outcrops emerging from the sandy bottom south of Favignana Island (known as 'I Pali'), and the anthropogenic features generated by the effects of traps, trawling and anchoring on the 'Posidonia oceanica' meadow that, within the Egadi Archipelago, is the largest in the Mediterranean Sea (www.ampisoleegadi.it). Unpublished and detailed characterization of the seafloor and assessment of human impacts are the main results of the present study, which demonstrate how remote sensing technologies have great potential and relevant management implications for Marine Protected Areas and for the preservation of emerged and submerged environments.

  8. Incidentally Detected Kaposi Sarcoma of Adrenal Gland with Anaplastic Features in an HIV Negative Patient

    Directory of Open Access Journals (Sweden)

    Zeliha Esin Celik

    2016-01-01

    Full Text Available Kaposi sarcoma (KS), a vascular tumor caused by infection with human herpesvirus 8 (HHV8), is a systemic disease that can present with cutaneous lesions with or without visceral involvement. Very few cases of KS, most of which were associated with AIDS, have been reported in the adrenal gland. Anaplastic transformation of KS is a rare clinical presentation known as an aggressive disease with local recurrence and metastatic potential. We report here a 47-year-old HIV-negative male who presented with extra-adrenal symptoms and had an incidentally detected anaplastic adrenal KS that exhibited an aggressive clinical course. To the best of our knowledge, this is the first case of anaplastic primary adrenal KS without mucocutaneous involvement, but with subsequent development of metastases in the contralateral adrenal gland, in an HIV-negative patient.

  9. Tianma 65-m telescope detection of new OH maser features towards the water fountain source IRAS 18286-0959

    Science.gov (United States)

    Chen, Xi; Shen, Zhi-Qiang; Li, Xiao-Qiong; Yang, Kai; Nakashima, Jun-ichi; Wu, Ya-Jun; Zhao, Rong-Bin; Li, Juan; Wang, Jun-Zhi; Jiang, Dong-Rong; Wang, Jin-Qing; Li, Bin; Zhong, Wei-Ye; Yung, Bosco H. K.

    2017-07-01

    We report the results of the OH maser observation towards the water fountain source IRAS 18286-0959 using the newly built Shanghai Tianma 65-m Radio Telescope. We observed the three OH ground-state transition lines at frequencies of 1612, 1665 and 1667 MHz. Compared with the spectra of previous observations, we find new maser spectral components at velocity channels largely shifted from the systemic velocity: the velocity offsets of the newly found components lie in the range 20-40 km s-1 with respect to the systemic velocity. Besides maser variability, another possible interpretation for the newly detected maser features is that part of the molecular gas in the circumstellar envelope is accelerated. The acceleration is probably caused by the passage of a high-velocity molecular jet, which has been detected in previous Very Long Baseline Interferometry observations in the H2O maser line.

  10. Feature-space assessment of electrical impedance tomography coregistered with computed tomography in detecting multiple contrast targets

    International Nuclear Information System (INIS)

    Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal

    2014-01-01

    Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit, configured for injecting current through 16 electrodes, placed around the phantom. A conductivity image of the phantom was obtained from the data using electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures. Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand
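
    A minimal sketch of the GLCM feature computation used in the analysis above (contrast, correlation, energy, homogeneity), here on a synthetic 8-bit region of interest. scikit-image ≥ 0.19 spells the functions graycomatrix/graycoprops; older releases use greycomatrix/greycoprops.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(7)
    roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder ROI

    # Co-occurrence matrix for horizontally adjacent pixels (distance 1, angle 0).
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop)[0, 0]
                for prop in ("contrast", "correlation", "energy", "homogeneity")}
    print(features)
    ```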

  11. Remote Sensing of Martian Terrain Hazards via Visually Salient Feature Detection

    Science.gov (United States)

    Al-Milli, S.; Shaukat, A.; Spiteri, C.; Gao, Y.

    2014-04-01

    The main objective of the FASTER remote sensing system is the detection of rocks on planetary surfaces by employing models that can efficiently characterise rocks in terms of semantic descriptions. The proposed technique abates some of the algorithmic limitations of existing methods, with no training requirements, lower computational complexity and greater robustness towards visual tracking applications over long-distance planetary terrains. Visual saliency models inspired by biological systems help to identify important regions (such as rocks) in the visual scene. Surface rocks are therefore completely described in terms of their local or global conspicuity pop-out characteristics. These local and global pop-out cues include (but are not limited to) colour, depth, orientation, curvature, size, luminance intensity, shape and topology. The currently applied methods follow a purely bottom-up strategy of visual attention for selection of conspicuous regions in the visual scene, without any top-down control. Furthermore, the chosen models (tested and evaluated) are relatively fast among the state of the art and have very low computational load. Quantitative evaluation of these state-of-the-art models was carried out using benchmark datasets including the Surrey Space Centre Lab Testbed, Pangu generated images, RAL Space SEEKER and CNES Mars Yard datasets. The analysis indicates that models based on visually salient information in the frequency domain (SRA, SDSR, PQFT) are the best-performing ones for detecting rocks in an extra-terrestrial setting. In particular, the SRA model seems to be the best of the lot, especially as it requires the least computational time while keeping errors competitively low. The salient objects extracted using these models can then be merged with the Digital Elevation Models (DEMs) generated from the same navigation cameras, and fused into the navigation map, thus giving a clear indication of the rock locations.
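
    A hedged sketch of the spectral-residual saliency idea (SRA) referred to above, implemented with NumPy/OpenCV on a generic grayscale image; the resize scale and blur parameters are illustrative choices, not those of the evaluated models.

    ```python
    import cv2
    import numpy as np

    def spectral_residual_saliency(gray, size=64):
        small = cv2.resize(gray.astype(np.float32), (size, size))
        spectrum = np.fft.fft2(small)
        log_amp = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)

        # Spectral residual = log amplitude minus its local (box-filtered) average.
        residual = log_amp - cv2.blur(log_amp, (3, 3))
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
        saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
        return cv2.resize(saliency, (gray.shape[1], gray.shape[0]))

    img = np.random.randint(0, 255, (240, 320), dtype=np.uint8)   # placeholder terrain image
    print(spectral_residual_saliency(img).shape)  # (240, 320)
    ```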

  12. A framework for automatic feature extraction from airborne light detection and ranging data

    Science.gov (United States)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to extract information about different kinds of geometrical objects, such as terrain and buildings, from LIDAR data automatically. These are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. Firstly, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from non-ground measurements using a region growing algorithm based on the plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise, which is caused by irregularly
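
    A very simplified sketch of a progressive morphological filter on a rasterised elevation grid: the opening window and the elevation-difference threshold grow together, so vehicles, vegetation and buildings are removed while ground cells are kept. Window sizes and thresholds are illustrative, and the dissertation's point-based formulation is reduced here to a gridded one.

    ```python
    import numpy as np
    from scipy.ndimage import grey_opening

    def progressive_morphological_filter(dsm, windows=(3, 9, 21), thresholds=(0.3, 1.0, 2.5)):
        ground = np.ones(dsm.shape, dtype=bool)
        surface = dsm.copy()
        for win, dh in zip(windows, thresholds):
            opened = grey_opening(surface, size=(win, win))
            # Cells rising more than dh above the opened surface are flagged as non-ground.
            ground &= (surface - opened) <= dh
            surface = opened
        return ground

    dsm = np.random.default_rng(8).random((100, 100))   # placeholder elevation grid
    dsm[40:45, 40:45] += 10.0                            # a "building"
    mask = progressive_morphological_filter(dsm)
    print(mask.sum(), "ground cells of", mask.size)
    ```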

  13. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    Science.gov (United States)

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
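
    A minimal sketch of the dimensionality-reduction step described above: high-dimensional CNN feature vectors (random placeholders here) embedded into 2-D with t-SNE for visual inspection of class structure.

    ```python
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(9)
    features = rng.normal(size=(500, 512))     # e.g. penultimate-layer CNN activations
    labels = rng.integers(0, 5, size=500)      # tissue classes (placeholder)

    embedding = TSNE(n_components=2, perplexity=30, init="pca",
                     random_state=0).fit_transform(features)
    print(embedding.shape)  # (500, 2)

    # embedding[:, 0], embedding[:, 1] can be scatter-plotted and coloured by
    # `labels` to inspect how the network organizes histomorphologic classes.
    ```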

  14. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    Science.gov (United States)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the steadily increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the measurement accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulsed laser source and provides practical guidelines on recommended parameter ranges for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted a finite element simulation-based study of the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best pulsed-laser parameter ranges, simulation mesh size and time step, working bandwidth, and minimal size of detectable melanoma.

  15. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

    Directory of Open Access Journals (Sweden)

    Xue Yang

    2018-01-01

    Full Text Available Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN), which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has state-of-the-art performance.

  16. Living in a digital world: features and applications of FPGA in photon detection

    Science.gov (United States)

    Arnesano, Cosimo

    signal processing in a digital fashion avoiding RF emission and it is extremely inexpensive. This development is the result of a systematic study carried out on a previous design known as the FLIMBox, developed as part of the thesis of another graduate student. The extensive work done in maximizing the performance of the original FLIMBox led us to develop a new hardware solution with exciting and promising results and potential that were not possible in the previous hardware realization, where the signal harmonic content was limited by the FPGA technology. The new design permits acquisition of a much larger harmonic content of the sample response when it is excited with a pulsed light source in one single measurement, using the digital mixing principle that was developed in the original design. Furthermore, we used the parallel digital FD principle to perform tissue imaging through Diffuse Optical Spectroscopy (DOS) measurements. We integrated the FLIMBox in a new system that uses a supercontinuum white laser with high brightness as a single light source and photomultipliers with large detection area, both allowing a high penetration depth with extremely low power at the sample. The parallel acquisition, achieved by using the FLIMBox, decreases the acquisition time relative to standard serial systems that scan through all modulation frequencies. Furthermore, the all-digital acquisition avoids analog noise, removes the analog mixer of the conventional frequency domain approach, and does not generate the radio frequencies normally present in current analog systems. We are able to obtain a very sensitive acquisition due to the high signal-to-noise ratio (S/N). The successful results obtained by utilizing digital technology in photon acquisition and processing prompted us to extend the use of FPGA to other applications, such as phosphorescence detection. Using the FPGA concept, we proposed possible solutions to outstanding problems with the current technology. In this thesis I discuss new

  17. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    Science.gov (United States)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather, and noise, result in considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels, and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle-color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learned by the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves recognition performance superior to state-of-the-art methods.

  18. Prevalence and features of fatty liver detected by physical examination in Guangzhou

    Science.gov (United States)

    Liao, Xian-Hua; Cao, Xu; Liu, Jie; Xie, Xiao-Hua; Sun, Yan-Hong; Zhong, Bi-Hui

    2013-01-01

    AIM: To investigate the prevalence of fatty liver discovered upon physical examination of Chinese patients and determine the associated clinical characteristics. METHODS: A total of 3433 consecutive patients who received physical examinations at the Huangpu Division of the First Affiliated Hospital at Sun Yat-sen University in Guangzhou, China from June 2010 to December 2010 were retrospectively enrolled in the study. Results of biochemical tests, abdominal ultrasound, electrocardiography, and chest X-ray were collected. The diagnosis of fatty liver was made if a patient met any two of the three following ultrasonic criteria: (1) liver and kidney echo discrepancy and presence of an increased liver echogenicity (bright); (2) unclear intrahepatic duct structure; and (3) liver far field echo decay. RESULTS: The study population consisted of 2201 males and 1232 females, with a mean age of 37.4 ± 12.8 years. When all 3433 patients were considered, the overall prevalence of hyperlipidemia was 38.1%, of fatty liver was 26.0%, of increased alanine aminotransferase (ALT) and/or aspartate aminotransferase (AST) levels was 11.9%, of gallstone was 11.4%, of hyperglycemia was 7.3%, of hypertension was 7.1%, and of hyperuricemia was 6.2%. Of the 2605 patients who completed the abdominal ultrasonography exam, 677 (26.0%) were diagnosed with fatty liver and the prevalence was higher in males (32.5% vs females: 15.3%, P 50-year-old did not reach statistical significance. Only 430 of the patients diagnosed with fatty liver had complete information; among those, increased ALT and/or AST levels were detected in only 30%, with all disturbances being mild or moderate. In these 430 patients, the overall prevalence of hypertriglyceridemia was 31.4%, of mixed type hyperlipidemia was 20.9%, of hypercholesterolemia was 12.3%, of hyperglycemia was 17.6%, of hypertension was 16.0%, of hyperuricemia was 15.3%, and of gallstone was 14.4%. Again, the prevalences of hypertriglyceridemia and

  19. Application of an Autonomous/Unmanned Survey Vessel (ASV/USV in Bathymetric Measurements

    Directory of Open Access Journals (Sweden)

    Specht Cezary

    2017-09-01

    Full Text Available The accuracy of bathymetric maps, especially in the coastal zone, is very important from the point of view of safety of navigation and transport. Due to the continuous change in shape of the seabed, these maps are fast becoming outdated for precise navigation. Therefore, it is necessary to perform periodical bathymetric measurements to keep them updated on a current basis. At present, none of the institutions in Poland (maritime offices, the Hydrographic Office of the Polish Navy) which are responsible for implementation of this type of measurements has at its disposal a hydrographic vessel capable of carrying out measurements in shallow waters (at depths below 1 m). This results in the emergence of large areas for which no measurement data have been obtained and, consequently, the maps of the coastal zones are rather unreliable.

  20. Bathymetric surveys at highway bridges crossing the Missouri and Mississippi Rivers near St. Louis, Missouri, 2010

    Science.gov (United States)

    Huizinga, Richard J.

    2011-01-01

    Bathymetric surveys were conducted by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, on the Missouri and Mississippi Rivers in the vicinity of 12 bridges at 7 highway crossings near St. Louis, Missouri, in October 2010. A multibeam echo sounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 3,280 to 4,590 feet long and extending across the active channel of the Missouri and Mississippi Rivers. These bathymetric scans provide a snapshot of the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be used by the Missouri Department of Transportation to assess the bridges for stability and integrity issues with respect to bridge scour.

  1. Bathymetric Structure from Motion Photogrammetry: Extracting stream bathymetry from multi-view stereo photogrammetry

    Science.gov (United States)

    Dietrich, J. T.

    2016-12-01

    Stream bathymetry is a critical variable in a number of river science applications. In larger rivers, bathymetry can be measured with instruments such as sonar (single or multi-beam), bathymetric airborne LiDAR, or acoustic doppler current profilers. However, in smaller streams with depths less than 2 meters, bathymetry is one of the more difficult variables to map at high-resolution. Optical remote sensing techniques offer several potential solutions for collecting high-resolution bathymetry. In this research, I focus on direct photogrammetric measurements of bathymetry using multi-view stereo photogrammetry, specifically Structure from Motion (SfM). The main barrier to accurate bathymetric mapping with any photogrammetric technique is correcting for the refraction of light as it passes between the two different media (air and water), which causes water depths to appear shallower than they are. I propose and test an iterative approach that calculates a series of refraction correction equations for every point/camera combination in a SfM point cloud. This new method is meant to address shortcomings of other correction techniques and works within the current preferred method for SfM data collection, oblique and highly convergent photographs. The multi-camera refraction correction presented here produces bathymetric datasets with accuracies of 0.02% of the flying height and precisions of 0.1% of the flying height. This methodology, like many fluvial remote sensing methods, will only work under ideal conditions (e.g. clear water), but it provides an additional tool for collecting high-resolution bathymetric datasets for a variety of river, coastal, and estuary systems.
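    The key step described above is a per-camera refraction correction applied to every point/camera combination. The following is a minimal single-camera sketch of that kind of correction only, not the author's iterative multi-camera implementation: it assumes a flat, horizontal water surface at a known elevation, a known camera position, and a water refractive index of about 1.337, and it uses the camera-to-point geometry to apply Snell's law.

      import numpy as np

      def refraction_corrected_depth(cam_xyz, pt_xyz, water_elev, n_water=1.337):
          """Correct one apparent (SfM) underwater point for refraction, for one camera.
          Assumes a flat, horizontal water surface at elevation `water_elev`."""
          cam = np.asarray(cam_xyz, float)
          pt = np.asarray(pt_xyz, float)
          apparent_depth = water_elev - pt[2]            # apparent depth below the surface
          if apparent_depth <= 0:
              return pt[2]                               # point is above water: no change
          horiz = np.hypot(pt[0] - cam[0], pt[1] - cam[1])
          # Approximate angle of the camera-to-point ray from vertical at the interface.
          incidence = np.arctan2(horiz, cam[2] - water_elev)
          if incidence < 1e-6:                           # near-nadir: small-angle correction
              return water_elev - apparent_depth * n_water
          # Snell's law (air to water): sin(i) = n_water * sin(r)
          refraction = np.arcsin(np.sin(incidence) / n_water)
          true_depth = apparent_depth * np.tan(incidence) / np.tan(refraction)
          return water_elev - true_depth                 # corrected bed elevation

      # Example: camera 50 m above a water surface at 10 m elevation, apparent point
      # 0.8 m below the surface and 20 m away horizontally.
      z_corr = refraction_corrected_depth((0, 0, 60), (20, 0, 9.2), water_elev=10.0)
      print(round(z_corr, 2))   # lower (deeper) elevation than the apparent 9.2 m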

  2. Bathymetric survey of Carroll Creek Tributary to Lake Tuscaloosa, Tuscaloosa County, Alabama, 2010

    Science.gov (United States)

    Lee, K.G.; Kimbrow, D.R.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the City of Tuscaloosa, conducted a bathymetric survey of Carroll Creek, on May 12-13, 2010. Carroll Creek is one of the major tributaries to Lake Tuscaloosa and contributes about 6 percent of the surface drainage area. A 3.5-mile reach of Carroll Creek was surveyed to prepare a current bathymetric map, determine storage capacities at specified water-surface elevations, and compare current conditions to historical cross sections. Bathymetric data were collected using a high-resolution interferometric mapping system consisting of a phase-differencing bathymetric sonar, navigation and motion-sensing system, and a data acquisition computer. To assess the accuracy of the interferometric mapping system and document depths in shallow areas of the study reach, an electronic total station was used to survey 22 cross sections spaced 50 feet apart. The data were combined and processed and a Triangulated Irregular Network (TIN) and contour map were generated. Cross sections were extracted from the TIN and compared with historical cross sections. Between 2004 and 2010, the area (cross section 1) at the confluence of Carroll Creek and the main run of Lake Tuscaloosa showed little to no change in capacity or area. Another area (cross section 2) showed a maximum change in elevation of 4 feet and an average change of 3 feet. At the water-surface elevation of 224 feet (National Geodetic Vertical Datum of 1929), the cross-sectional area has changed by 260 square feet for a total loss of 28 percent of cross-sectional storage area. The loss of area may be attributed to sedimentation in Carroll Creek and (or) the difference in accuracy between the two surveys.

  3. MORPHO-BATHYMETRIC PARAMETERS OF RECESS CRUCII LAKE (STÂNIŞOAREI MOUNTAINS

    Directory of Open Access Journals (Sweden)

    ALIN MIHU-PINTILIE

    2012-03-01

    Full Text Available Morpho-bathymetric parameters of recess Crucii Lake (Stânişoarei Mountains). Crucii Lake in the Stânişoarei Mountains was formed in 1978 as a result of the damming of the Cuejdel riverbed after a landslide triggered on the western slope of Muncelul Peak. The event initially led to a small water body of 250-300 m, 25-30 m wide and 4-5 m maximum depth. In the summer of 1991, following the construction of a forest road in the flysch and amid highly humid conditions, the slide was reactivated, leading to the formation of the largest natural dam lake in Romania. It has a length of 1 km, an area of 12.2 ha, a maximum depth of 16 m and a water volume of ca. 907,000 m3. Morphometric and morpho-bathymetric measurements performed in the summer of 2011, with the help of a Leica System 1200 GPS station for the surveying measurements and a Valeport Midas echosounder for the bathymetric measurements, showed new values for the morpho-bathymetric parameters. Among them stand out: an area of 13.95 ha, a perimeter of 2801.1 m, a maximum length of 1004.82 m, a maximum width of 282.6 m, and a maximum depth of 16.45 m. To build the numerical model of the lake basin, more than 45,000 reading points were used, with an equidistance of 0.25 m. The detailed scale of the work was aimed at drawing up a proper database to eliminate suspicions about inaccuracies of the old analytical methods. At the same time, the evolution of the lake basin was studied in the context of relatively recent geomorphological changes.

  4. Bathymetric survey and digital elevation model of Little Holland Tract, Sacramento-San Joaquin Delta, California

    Science.gov (United States)

    Snyder, Alexander G.; Lacy, Jessica R.; Stevens, Andrew W.; Carlson, Emily M.

    2016-06-10

    The U.S. Geological Survey conducted a bathymetric survey in Little Holland Tract, a flooded agricultural tract, in the northern Sacramento-San Joaquin Delta (the “Delta”) during the summer of 2015. The new bathymetric data were combined with existing data to generate a digital elevation model (DEM) at 1-meter resolution. Little Holland Tract (LHT) was historically diked off for agricultural uses and has been tidally inundated since an accidental levee breach in 1983. Shallow tidal regions such as LHT have the potential to improve habitat quality in the Delta. The DEM of LHT was developed to support ongoing studies of habitat quality in the area and to provide a baseline for evaluating future geomorphic change. The new data comprise 138,407 linear meters of real-time-kinematic (RTK) Global Positioning System (GPS) elevation data, including both bathymetric data collected from personal watercraft and topographic elevations collected on foot at low tide. A benchmark (LHT15_b1) was established for geodetic control of the survey. Data quality was evaluated both by comparing results among surveying platforms, which showed systematic offsets of 1.6 centimeters (cm) or less, and by error propagation, which yielded a mean vertical uncertainty of 6.7 cm. Based on the DEM and time-series measurements of water depth, the mean tidal prism of LHT was determined to be 2,826,000 cubic meters. The bathymetric data and DEM are available at http://dx.doi.org/10.5066/F7RX9954. 

  5. Bathymetric maps and water-quality profiles of Table Rock and North Saluda Reservoirs, Greenville County, South Carolina

    Science.gov (United States)

    Clark, Jimmy M.; Journey, Celeste A.; Nagle, Doug D.; Lanier, Timothy H.

    2014-01-01

    Lakes and reservoirs are the water-supply source for many communities. As such, water-resource managers that oversee these water supplies require monitoring of the quantity and quality of the resource. Monitoring information can be used to assess the basic conditions within the reservoir and to establish a reliable estimate of storage capacity. In April and May 2013, a global navigation satellite system receiver and fathometer were used to collect bathymetric data, and an autonomous underwater vehicle was used to collect water-quality and bathymetric data at Table Rock Reservoir and North Saluda Reservoir in Greenville County, South Carolina. These bathymetric data were used to create a bathymetric contour map and stage-area and stage-volume relation tables for each reservoir. Additionally, statistical summaries of the water-quality data were used to provide a general description of water-quality conditions in the reservoirs.

  6. NOAA TIFF Image - 4m Bathymetric Depth of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Depth GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  7. NOAA TIFF Image - 4m Bathymetric Curvature of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Curvature GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  8. NOAA TIFF Image - 4m Bathymetric Mean Depth of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Mean Depth GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  9. NOAA TIFF Image - 4m Bathymetric Depth Range of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Depth Range GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the...

  10. BATHYMETRIC STUDY OF WADI EL-RAYAN LAKES, EGYPT

    Directory of Open Access Journals (Sweden)

    Radwan Gad Elrab ABD ELLAH

    2016-12-01

    Full Text Available Bathymetry is a technique of measuring depths to determine the morphometry of water bodies. The derivation of bathymetry from surveys is one of the basic lines of research on the aquatic environment, with several practical implications for the lake environment and its monitoring. Wadi El-Rayan, a Ramsar site, is a very important wetland in Egypt, serving as a reservoir for agricultural drainage water, fisheries and tourism. The lakes are man-made basins in the Fayoum depression. Wadi El-Rayan Lakes are two reservoirs (Upper Lake and Lower Lake) at different elevations. The Upper Lake is classified as an open basin, while the Lower Lake is a closed basin with no significant water outflow. During recent decades, human impact on the Wadi El-Rayan Lakes has increased due to intensification of agriculture and fish farming. Analyses of bathymetric plans from 1996, 2010 and 2016 showed that the differences between morphometric parameters of the Upper Lake were generally small, while the changes in the Lower Lake are obvious across the three periods. The small fluctuation in the features of the Upper Lake is due to the balance between water inflow and water loss. The extreme water loss faced by the Lower Lake over the last twenty years is due to the agricultural lands and fish farms extending into the depression. The Upper Lake is rich in lakeshore macrophytes, while water plants have declined in the Lower Lake. With low water levels in the Lower Lake, the future continuity of the lake system is in jeopardy.

  11. An introductory analysis of digital infrared thermal imaging guided oral cancer detection using multiresolution rotation invariant texture features

    Science.gov (United States)

    Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.

    2017-03-01

    This manuscript presents an analytical treatment on the feasibility of multi-scale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The inability of present healthcare technology to detect oral cancer at a budding stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown the discriminative difference of facial temperature distribution between a normal subject and a patient. The proposed work, for the first time, exploits this difference further by representing the facial Region of Interest (ROI) using multiscale rotation invariant Gabor filter bank responses followed by classification using a Radial Basis Function (RBF) kernelized Support Vector Machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by degradation of performance; an indication that addition of more and more finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the ineptness of Gabor kernels to analyze low spatial frequency components over a small facial surface area. On our dataset comprising of 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal v/s precancerous and 84.72% for normal v/s malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in IR spectrum for oral cancer detection.
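    The pipeline above (multi-scale Gabor filter bank responses feeding an RBF-kernel SVM) can be sketched in a few lines. The Python sketch below is a minimal illustration under stated assumptions only: it uses synthetic random patches in place of thermal ROIs, a small filter bank, and simple mean/variance response statistics, so it does not reproduce the authors' dataset or exact feature design.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import gabor_kernel
      from sklearn.svm import SVC

      def gabor_features(image, frequencies=(0.15, 0.25, 0.35), n_orient=4):
          """Mean and variance of Gabor magnitude responses over scales/orientations."""
          feats = []
          for f in frequencies:
              for k in range(n_orient):
                  kernel = gabor_kernel(f, theta=k * np.pi / n_orient)
                  real = ndi.convolve(image, np.real(kernel), mode='wrap')
                  imag = ndi.convolve(image, np.imag(kernel), mode='wrap')
                  mag = np.hypot(real, imag)
                  feats += [mag.mean(), mag.var()]
          return np.array(feats)

      rng = np.random.default_rng(1)
      # Stand-in "thermal ROIs": two classes differing in texture smoothness.
      images = [ndi.gaussian_filter(rng.normal(size=(64, 64)), s) for s in [1] * 20 + [3] * 20]
      labels = np.array([0] * 20 + [1] * 20)
      X = np.array([gabor_features(im) for im in images])

      clf = SVC(kernel='rbf', gamma='scale').fit(X[::2], labels[::2])   # train on half
      print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))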

  12. USING COMBINATION OF PLANAR AND HEIGHT FEATURES FOR DETECTING BUILT-UP AREAS FROM HIGH-RESOLUTION STEREO IMAGERY

    Directory of Open Access Journals (Sweden)

    F. Peng

    2017-09-01

    Full Text Available Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrades the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as data source of identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM, and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using the decision level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need to require auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  13. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    Science.gov (United States)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrades the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as data source of identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM, and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using the decision level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need to require auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  14. Possible Detection of an Emission Cyclotron Resonance Scattering Feature from the Accretion-Powered Pulsar 4U 1626-67

    Science.gov (United States)

    Iwakiri, W. B.; Terada, Y.; Tashiro, M. S.; Mihara, T.; Angelini, L.; Yamada, S.; Enoto, T.; Makishima, K.; Nakajima, M.; Yoshida, A.

    2012-01-01

    We present an analysis of 4U 1626-67, a 7.7 s pulsar in a low-mass X-ray binary system, observed with the hard X-ray detector of the Japanese X-ray satellite Suzaku in 2006 March for a net exposure of 88 ks. The source was detected at an average 10-60 keV flux of approximately 4 x 10^-10 erg/sq cm/s. The phase-averaged spectrum is reproduced well by combining a negative and positive power-law times exponential cutoff (NPEX) model modified at approximately 37 keV by a cyclotron resonance scattering feature (CRSF). The phase-resolved analysis shows that the spectra at the bright phases are well fit by the NPEX with CRSF model. On the other hand, the spectrum in the dim phase lacks the NPEX high-energy cutoff component, and the CRSF can be reproduced by either an emission or an absorption profile. When fitting the dim phase spectrum with the NPEX plus Gaussian model, we find that the feature is better described in terms of an emission rather than an absorption profile. The statistical significance of this result, evaluated by means of an F test, is between 2.91 x 10^-3 and 1.53 x 10^-5, taking into account the systematic errors in the background evaluation of HXD-PIN. We find that the emission profile is more feasible than the absorption one when comparing the physical parameters in other phases. Therefore, we have possibly detected an emission line at the cyclotron resonance energy in the dim phase.

  15. Detection of Reflection Features in the Neutron Star Low-mass X-Ray Binary Serpens X-1 with NICER

    Science.gov (United States)

    Ludlam, R. M.; Miller, J. M.; Arzoumanian, Z.; Bult, P. M.; Cackett, E. M.; Chakrabarty, D.; Enoto, T.; Fabian, A. C.; Gendreau, K. C.; Guillot, S.; Homan, J.; Jaisawal, G. K.; Keek, L.; La Marr, B.; Malacaria, C.; Markwardt, C. B.; Steiner, J. F.; Strohmayer, T. E.

    2018-05-01

    We present Neutron Star Interior Composition Explorer (NICER) observations of the neutron star (NS) low-mass X-ray binary Serpens X-1 during the early mission phase in 2017. With the high spectral sensitivity and low-energy X-ray passband of NICER, we are able to detect the Fe L line complex in addition to the signature broad, asymmetric Fe K line. We confirm the presence of these lines by comparing the NICER data to archival observations with XMM-Newton/Reflection Grating Spectrometer (RGS) and NuSTAR. Both features originate close to the innermost stable circular orbit (ISCO). When modeling the lines with the relativistic line model RELLINE, we find that the Fe L blend requires an inner disk radius of 1.4 (+0.2/-0.1) R_ISCO and Fe K is at 1.03 (+0.13/-0.03) R_ISCO (errors quoted at 90%). This corresponds to positions of 17.3 (+2.5/-1.2) km and 12.7 (+1.6/-0.4) km for a canonical NS mass (M_NS = 1.4 M_sun) and a dimensionless spin value of a = 0. Additionally, we employ a new version of the RELXILL model tailored for NSs and determine that these features arise from a dense disk and supersolar Fe abundance.

  16. Integrative analysis of gene expression and DNA methylation using unsupervised feature extraction for detecting candidate cancer biomarkers.

    Science.gov (United States)

    Moon, Myungjin; Nakai, Kenta

    2018-04-01

    Currently, cancer biomarker discovery is one of the important research topics worldwide. In particular, detecting significant genes related to cancer is an important task for early diagnosis and treatment of cancer. Conventional studies mostly focus on genes that are differentially expressed in different states of cancer; however, noise in gene expression datasets and insufficient information in limited datasets impede precise analysis of novel candidate biomarkers. In this study, we propose an integrative analysis of gene expression and DNA methylation using normalization and unsupervised feature extractions to identify candidate biomarkers of cancer using renal cell carcinoma RNA-seq datasets. Gene expression and DNA methylation datasets are normalized by Box-Cox transformation and integrated into a one-dimensional dataset that retains the major characteristics of the original datasets by unsupervised feature extraction methods, and differentially expressed genes are selected from the integrated dataset. Use of the integrated dataset demonstrated improved performance as compared with conventional approaches that utilize gene expression or DNA methylation datasets alone. Validation based on the literature showed that a considerable number of top-ranked genes from the integrated dataset have known relationships with cancer, implying that novel candidate biomarkers can also be acquired from the proposed analysis method. Furthermore, we expect that the proposed method can be expanded for applications involving various types of multi-omics datasets.
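    The integration step described above (normalize each data type, combine expression and methylation into a single one-dimensional profile, then select differentially behaving genes) can be illustrated with a simple stand-in. The Python sketch below uses Box-Cox normalization and, as an assumption, a per-gene first principal component in place of the paper's unsupervised feature extraction method, followed by a t-test on synthetic tumour/normal data; it is not the authors' implementation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      n_genes, n_samples = 500, 40                       # 20 tumour + 20 normal samples
      group = np.array([1] * 20 + [0] * 20)

      expr = rng.gamma(2.0, 2.0, (n_genes, n_samples))           # RNA-seq-like, positive
      meth = rng.beta(2.0, 5.0, (n_genes, n_samples)) + 1e-3     # methylation-like, positive
      expr[:15, group == 1] *= 3.0                               # genes 0-14 altered in tumours
      meth[:15, group == 1] *= 0.5

      def boxcox_z(row):
          """Box-Cox transform followed by z-scoring (values must be positive)."""
          v, _ = stats.boxcox(row)
          return (v - v.mean()) / v.std()

      candidates = []
      for g in range(n_genes):
          # Integrate the two data types for gene g into one profile per sample via the
          # first principal component of the 2 x n_samples matrix (a simple stand-in
          # for the paper's unsupervised feature extraction step).
          m = np.vstack([boxcox_z(expr[g]), boxcox_z(meth[g])])
          _, _, vt = np.linalg.svd(m - m.mean(axis=1, keepdims=True), full_matrices=False)
          integrated = vt[0]
          t, p = stats.ttest_ind(integrated[group == 1], integrated[group == 0])
          if p < 0.001:
              candidates.append(g)
      print("candidate genes:", candidates[:20])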

  17. Computer-aided mass detection in mammography: False positive reduction via gray-scale invariant ranklet texture features

    International Nuclear Information System (INIS)

    Masotti, Matteo; Lanconelli, Nico; Campanini, Renato

    2009-01-01

    In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks

  18. SU-F-R-17: Advancing Glioblastoma Multiforme (GBM) Recurrence Detection with MRI Image Texture Feature Extraction and Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Yu, V; Ruan, D; Nguyen, D; Kaprealian, T; Chin, R; Sheng, K [UCLA School of Medicine, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images post primary treatment intervention. Methods: MR image-sets of six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-Flair, and T2-TSE images. Eight Gray-level co-occurrence matrix (GLCM) texture features including Contrast, Correlation, Dissimilarity, Energy, Entropy, Homogeneity, Sum-Average, and Variance were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI) and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time as expected. The size of classified recurrence maintained at a stable level of approximately 0.1 cm^3 up to 272 days prior to confirmation. Noticeable volume increase to 0.44 cm^3 was demonstrated at 96 days prior, followed by significant increase to 1.57 cm^3 at 42 days prior. Visualization of the classified volume shows the merging of recurrence-susceptible region as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting the recurrent GBM a long time before the recurrence is confirmed by a radiologist. The early detection may improve the efficacy of targeted intervention including radiosurgery. More patient cases will be included to create a generalizable
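    The core of the method above is GLCM texture features feeding an SVM classifier. As a minimal sketch only, assuming scikit-image 0.19 or later (the functions are spelled greycomatrix/greycoprops in older releases) and using synthetic patches in place of MR ROIs, the feature extraction and classification steps look roughly as follows; entropy, sum-average and variance are not built into graycoprops, so only entropy is added by hand here.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def glcm_features(patch, levels=32):
          """A few GLCM texture features from an image patch with values in [0, 1]."""
          q = (patch * (levels - 1)).astype(np.uint8)            # quantise grey levels
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          feats = [graycoprops(glcm, p).mean()
                   for p in ('contrast', 'correlation', 'dissimilarity',
                             'energy', 'homogeneity')]
          p = glcm[glcm > 0]
          feats.append(-(p * np.log2(p)).sum())                  # entropy (added by hand)
          return np.array(feats)

      rng = np.random.default_rng(2)
      smooth = [ndi.gaussian_filter(rng.random((24, 24)), 2) for _ in range(30)]  # stand-in class 0
      noisy = [rng.random((24, 24)) for _ in range(30)]                           # stand-in class 1
      X = np.array([glcm_features(p) for p in smooth + noisy])
      y = np.array([0] * 30 + [1] * 30)

      clf = SVC(kernel='rbf', gamma='scale').fit(X[::2], y[::2])
      print("held-out accuracy:", clf.score(X[1::2], y[1::2]))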

  19. AUTOMATED DETECTION OF MITOTIC FIGURES IN BREAST CANCER HISTOPATHOLOGY IMAGES USING GABOR FEATURES AND DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Maqlin Paramanandam

    2016-11-01

    Full Text Available The count of mitotic figures in breast cancer histopathology slides is the most significant independent prognostic factor enabling determination of the proliferative activity of the tumor. In spite of the strict protocols followed, the mitotic counting activity suffers from subjectivity and a considerable amount of observer variability, besides being a laborious task. Interest in automated detection of mitotic figures has been rekindled with the advent of whole slide scanners. Subsequently, mitotic detection grand challenge contests have been held in recent years and several research methodologies have been developed by their participants. This paper proposes an efficient mitotic detection methodology for Hematoxylin and Eosin stained breast cancer histopathology images using Gabor features and a Deep Belief Network-Deep Neural Network architecture (DBN-DNN). The proposed method has been evaluated on breast histopathology images from the publicly available dataset from the MITOS contest held at the ICPR 2012 conference. It contains 226 mitoses annotated on 35 HPFs by several pathologists and 15 testing HPFs, yielding an F-measure of 0.74. In addition, the methodology was also tested on 3 slides from the MITOS-ATYPIA grand challenge held at the ICPR 2014 conference, an extension of MITOS containing 749 mitoses annotated on 1,200 HPFs by pathologists worldwide. This study employed 3 slides (294 HPFs) from the MITOS-ATYPIA training dataset in its evaluation, and the results showed F-measures of 0.65, 0.72 and 0.74 for each slide. The proposed method is fast and computationally simple, yet its accuracy and specificity are comparable to the best winning methods of the aforementioned grand challenges.

  20. xMSanalyzer: automated pipeline for improved feature detection and downstream analysis of large-scale, non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Uppal Karan

    2013-01-01

    Full Text Available Abstract Background Detection of low abundance metabolites is important for de novo mapping of metabolic pathways related to diet, microbiome or environmental exposures. Multiple algorithms are available to extract m/z features from liquid chromatography-mass spectral data in a conservative manner, which tends to preclude detection of low abundance chemicals and chemicals found in small subsets of samples. The present study provides software to enhance such algorithms for feature detection, quality assessment, and annotation. Results xMSanalyzer is a set of utilities for automated processing of metabolomics data. The utilities can be classified into four main modules to: (1) improve feature detection for replicate analyses by systematic re-extraction with multiple parameter settings and data merger to optimize the balance between sensitivity and reliability, (2) evaluate sample quality and feature consistency, (3) detect feature overlap between datasets, and (4) characterize high-resolution m/z matches to small molecule metabolites and biological pathways using multiple chemical databases. The package was tested with plasma samples and shown to more than double the number of features extracted while improving quantitative reliability of detection. MS/MS analysis of a random subset of peaks that were exclusively detected using xMSanalyzer confirmed that the optimization scheme improves detection of real metabolites. Conclusions xMSanalyzer is a package of utilities for data extraction, quality control assessment, detection of overlapping and unique metabolites in multiple datasets, and batch annotation of metabolites. The program was designed to integrate with existing packages such as apLCMS and XCMS, but the framework can also be used to enhance data extraction for other LC/MS data software.

  1. Radiomic features for prostate cancer detection on MRI differ between the transition and peripheral zones: Preliminary findings from a multi-institutional study.

    Science.gov (United States)

    Ginsburg, Shoshana B; Algohary, Ahmad; Pahwa, Shivani; Gulani, Vikas; Ponsky, Lee; Aronen, Hannu J; Boström, Peter J; Böhm, Maret; Haynes, Anne-Maree; Brenner, Phillip; Delprado, Warick; Thompson, James; Pulbrock, Marley; Taimen, Pekka; Villani, Robert; Stricker, Phillip; Rastinehad, Ardeshir R; Jambor, Ivan; Madabhushi, Anant

    2017-07-01

    To evaluate in a multi-institutional study whether radiomic features useful for prostate cancer (PCa) detection from 3 Tesla (T) multi-parametric MRI (mpMRI) in the transition zone (TZ) differ from those in the peripheral zone (PZ). 3T mpMRI, including T2-weighted (T2w), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced MRI (DCE-MRI), were retrospectively obtained from 80 patients at three institutions. This study was approved by the institutional review board of each participating institution. First-order statistical, co-occurrence, and wavelet features were extracted from T2w MRI and ADC maps, and contrast kinetic features were extracted from DCE-MRI. Feature selection was performed to identify 10 features for PCa detection in the TZ and PZ, respectively. Two logistic regression classifiers used these features to detect PCa and were evaluated by area under the receiver-operating characteristic curve (AUC). Classifier performance was compared with a zone-ignorant classifier. Radiomic features that were identified as useful for PCa detection differed between TZ and PZ. When classification was performed on a per-voxel basis, a PZ-specific classifier detected PZ tumors on an independent test set with significantly higher accuracy (AUC = 0.61-0.71) than a zone-ignorant classifier trained to detect cancer throughout the entire prostate (P  0.14) were obtained for all institutions. A zone-aware classifier significantly improves the accuracy of cancer detection in the PZ. 3 Technical Efficacy: Stage 2 J. MAGN. RESON. IMAGING 2017;46:184-193. © 2016 International Society for Magnetic Resonance in Medicine.

  2. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach
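    The first stage described above classifies PPG segments as corrupted or clean from time-frequency spectral features. The Python sketch below is a minimal illustration under stated assumptions: it uses synthetic pulse-like signals, a 50 Hz sampling rate, two simple spectrogram statistics (dominant frequency and spectral entropy), and an SVM; it is not the TifMA algorithm or its feature set.

      import numpy as np
      from scipy.signal import spectrogram
      from sklearn.svm import SVC

      fs = 50                                            # assumed PPG sampling rate (Hz)

      def tf_features(segment):
          """Simple time-frequency features of one PPG segment."""
          f, t, S = spectrogram(segment, fs=fs, nperseg=128, noverlap=64)
          p = S / S.sum(axis=0, keepdims=True)           # per-window spectral distribution
          dominant = f[S.argmax(axis=0)]                 # dominant frequency per window
          entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)
          return np.array([dominant.mean(), dominant.std(),
                           entropy.mean(), entropy.std()])

      rng = np.random.default_rng(3)
      tt = np.arange(10 * fs) / fs
      clean = [np.sin(2 * np.pi * 1.2 * tt) + 0.05 * rng.normal(size=tt.size)
               for _ in range(40)]                       # ~72 bpm pulse stand-ins
      corrupted = [np.sin(2 * np.pi * 1.2 * tt) + rng.normal(size=tt.size).cumsum() * 0.02
                   for _ in range(40)]                   # motion-like baseline wander
      X = np.array([tf_features(s) for s in clean + corrupted])
      y = np.array([0] * 40 + [1] * 40)

      clf = SVC(kernel='rbf', gamma='scale').fit(X[::2], y[::2])
      print("held-out accuracy:", clf.score(X[1::2], y[1::2]))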

  3. Detecting PHG frames in wireless capsule endoscopy video by integrating rough global dominate-color with fine local texture features

    Science.gov (United States)

    Liu, Xiaoqi; Wang, Chengliang; Bai, Jianying; Liao, Guobin

    2018-02-01

    Portal hypertensive gastropathy (PHG) is common in gastrointestinal (GI) diseases, and a severe stage of PHG (S-PHG) is a source of active gastrointestinal bleeding. Generally, the diagnosis of PHG is made visually during endoscopic examination; compared with traditional endoscopy, wireless capsule endoscopy (WCE), which is noninvasive and painless, has become a prevalent tool for visual observation of PHG. However, accurate assessment of WCE images with PHG is a difficult task for physicians due to faint contrast and confusing variations in the background gastric mucosal tissue. Therefore, this paper proposes a comprehensive methodology to automatically detect S-PHG images in WCE video to help physicians accurately diagnose S-PHG. Firstly, a rough dominant-color-tone extraction approach is proposed to better describe the global color distribution information of the gastric mucosa. Secondly, a hybrid two-layer texture acquisition model is designed by integrating a co-occurrence matrix into local binary patterns to depict the complex and unique local variation of the gastric mucosal microstructure. Finally, features of mucosal color and microstructure texture are merged into a linear support vector machine to accomplish this automatic classification task. Experiments were implemented on an annotated data set including 1,050 S-PHG and 1,370 normal images collected from 36 real patients of different nationalities, ages and genders. By comparison with three traditional texture extraction methods, our method performs best in the detection of S-PHG images in WCE video: the maximum accuracy, sensitivity and specificity reach 0.90, 0.92 and 0.92, respectively.

  4. Cascade detection for the extraction of localized sequence features; specificity results for HIV-1 protease and structure-function results for the Schellman loop.

    Science.gov (United States)

    Newell, Nicholas E

    2011-12-15

    The extraction of the set of features most relevant to function from classified biological sequence sets is still a challenging problem. A central issue is the determination of expected counts for higher order features so that artifact features may be screened. Cascade detection (CD), a new algorithm for the extraction of localized features from sequence sets, is introduced. CD is a natural extension of the proportional modeling techniques used in contingency table analysis into the domain of feature detection. The algorithm is successfully tested on synthetic data and then applied to feature detection problems from two different domains to demonstrate its broad utility. An analysis of HIV-1 protease specificity reveals patterns of strong first-order features that group hydrophobic residues by side chain geometry and exhibit substantial symmetry about the cleavage site. Higher order results suggest that favorable cooperativity is weak by comparison and broadly distributed, but indicate possible synergies between negative charge and hydrophobicity in the substrate. Structure-function results for the Schellman loop, a helix-capping motif in proteins, contain strong first-order features and also show statistically significant cooperativities that provide new insights into the design of the motif. These include a new 'hydrophobic staple' and multiple amphipathic and electrostatic pair features. CD should prove useful not only for sequence analysis, but also for the detection of multifactor synergies in cross-classified data from clinical studies or other sources. Windows XP/7 application and data files available at: https://sites.google.com/site/cascadedetect/home. nacnewell@comcast.net Supplementary information is available at Bioinformatics online.
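    A central point above is comparing observed counts of higher-order features against counts expected from first-order frequencies, so that artifact features can be screened. The following Python sketch illustrates only that underlying idea on a toy aligned sequence set with one injected cooperative pair; it is not the cascade detection algorithm itself, and the enrichment threshold is an arbitrary assumption.

      import numpy as np
      from itertools import combinations
      from collections import Counter

      # Toy aligned peptide set; positions 1 and 3 carry an injected correlated pair.
      rng = np.random.default_rng(4)
      alphabet = list("ACDEFG")
      seqs = []
      for _ in range(500):
          s = [rng.choice(alphabet) for _ in range(4)]
          if rng.random() < 0.4:                         # inject a cooperative pair
              s[1], s[3] = "A", "D"
          seqs.append("".join(s))

      n = len(seqs)
      L = len(seqs[0])
      single = [Counter(s[i] for s in seqs) for i in range(L)]   # first-order counts

      # Compare observed pair counts with counts expected under positional independence.
      for i, j in combinations(range(L), 2):
          pairs = Counter((s[i], s[j]) for s in seqs)
          for (a, b), obs in pairs.items():
              exp = single[i][a] * single[j][b] / n
              if exp >= 5 and obs > 1.5 * exp:           # crude screen for enriched pairs
                  print(f"pos{i}={a}, pos{j}={b}: observed {obs}, expected {exp:.1f}")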

  5. Applicability of computer-aided comprehensive tool (LINDA: LINeament Detection and Analysis) and shaded digital elevation model for characterizing and interpreting morphotectonic features from lineaments

    Science.gov (United States)

    Masoud, Alaa; Koike, Katsuaki

    2017-09-01

    Detection and analysis of linear features related to surface and subsurface structures have been deemed necessary in natural resource exploration and earth surface instability assessment. Subjectivity in choosing control parameters required in conventional methods of lineament detection may cause unreliable results. To reduce this ambiguity, we developed LINDA (LINeament Detection and Analysis), an integrated tool with graphical user interface in Visual Basic. This tool automates processes of detection and analysis of linear features from grid data of topography (digital elevation model; DEM), gravity and magnetic surfaces, as well as data from remote sensing imagery. A simple interface with five display windows forms a user-friendly interactive environment. The interface facilitates grid data shading, detection and grouping of segments, lineament analyses for calculating strike and dip and estimating fault type, and interactive viewing of lineament geometry. Density maps of the center and intersection points of linear features (segments and lineaments) are also included. A systematic analysis of test DEMs and Landsat 7 ETM+ imagery datasets in the North and South Eastern Deserts of Egypt is implemented to demonstrate the capability of LINDA and correct use of its functions. Linear features from the DEM are superior to those from the imagery in terms of frequency, but both linear features agree with location and direction of V-shaped valleys and dykes and reference fault data. Through the case studies, LINDA applicability is demonstrated to highlight dominant structural trends, which can aid understanding of geodynamic frameworks in any region.
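    LINDA itself automates shading, segment detection, grouping and strike/dip estimation; its internal algorithm is not described in enough detail here to reproduce. As a loose illustration of one common automated route from a shaded DEM to line segments and their orientations, the Python sketch below (assuming OpenCV) applies a standard hillshade, Canny edge detection and a probabilistic Hough transform to a synthetic DEM containing a scarp-like lineament; the thresholds are arbitrary assumptions.

      import numpy as np
      import cv2

      def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0):
          """Standard hillshade of a DEM grid (illumination from azimuth/altitude)."""
          gy, gx = np.gradient(dem)
          slope = np.arctan(np.hypot(gx, gy))
          aspect = np.arctan2(-gx, gy)
          az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
          shaded = (np.sin(alt) * np.cos(slope)
                    + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
          return ((shaded - shaded.min()) / np.ptp(shaded) * 255).astype(np.uint8)

      # Synthetic DEM with a linear scarp standing in for a fault-related lineament.
      y, x = np.mgrid[0:256, 0:256].astype(float)
      dem = 0.02 * x + 5.0 * ((x + y) > 300) + np.random.default_rng(5).normal(0, 0.05, (256, 256))

      shade = hillshade(dem)
      edges = cv2.Canny(shade, 50, 150)
      segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                 minLineLength=40, maxLineGap=5)
      for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
          orient = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180   # orientation vs. image x-axis
          print(f"segment ({x1},{y1})-({x2},{y2}), orientation ~{orient:.0f} deg")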

  6. Metabolic costs imposed by hydrostatic pressure constrain bathymetric range in the lithodid crab Lithodes maja.

    Science.gov (United States)

    Brown, Alastair; Thatje, Sven; Morris, James P; Oliphant, Andrew; Morgan, Elizabeth A; Hauton, Chris; Jones, Daniel O B; Pond, David W

    2017-11-01

    The changing climate is shifting the distributions of marine species, yet the potential for shifts in depth distributions is virtually unexplored. Hydrostatic pressure is proposed to contribute to a physiological bottleneck constraining depth range extension in shallow-water taxa. However, bathymetric limitation by hydrostatic pressure remains undemonstrated, and the mechanism limiting hyperbaric tolerance remains hypothetical. Here, we assess the effects of hydrostatic pressure in the lithodid crab Lithodes maja (bathymetric range 4-790 m depth, approximately equivalent to 0.1 to 7.9 MPa hydrostatic pressure). Heart rate decreased with increasing hydrostatic pressure, and was significantly lower at ≥10.0 MPa than at 0.1 MPa. Oxygen consumption increased with increasing hydrostatic pressure to 12.5 MPa, before decreasing as hydrostatic pressure increased to 20.0 MPa; oxygen consumption was significantly higher at 7.5-17.5 MPa than at 0.1 MPa. Increases in expression of genes associated with neurotransmission, metabolism and stress were observed between 7.5 and 12.5 MPa. We suggest that hyperbaric tolerance in L. maja may be oxygen-limited by hyperbaric effects on heart rate and metabolic rate, but that L. maja's bathymetric range is limited by metabolic costs imposed by the effects of high hydrostatic pressure. These results advocate including hydrostatic pressure in a complex model of environmental tolerance, where energy limitation constrains biogeographic range, and facilitate the incorporation of hydrostatic pressure into the broader metabolic framework for ecology and evolution. Such an approach is crucial for accurately projecting biogeographic responses to changing climate, and for understanding the ecology and evolution of life at depth. © 2017. Published by The Company of Biologists Ltd.

  7. Joint Interpretation of Bathymetric and Gravity Anomaly Maps Using Cross and Dot-Products.

    Science.gov (United States)

    Jilinski, Pavel; Fontes, Sergio Luiz

    2010-05-01

    0.1 Summary We present the results of a joint map interpretation technique based on cross and dot-products applied to bathymetric and gravity anomaly gradient maps. According to the theory (Gallardo and Meju, 2004), joint interpretation of different gradient characteristics helps to localize and emphasize patterns unseen in single-image interpretation and gives information about the correlation of different spatial data. Values of the angles between gradients and their cross and dot-products were used. This technique helps to map relations between bathymetric and gravity anomaly maps that remain unseen if the maps are analyzed separately. According to the method applied to the southern segment of the Eastern Brazilian coast, bathymetric and gravity anomaly gradients indicate a strong source-effect relation between them. The details of the method and the obtained results are discussed. 0.2 Introduction We applied this method to investigate the correlation between bathymetric and gravity anomalies at the southern segment of the Eastern Brazilian coast. Gridded satellite global marine gravity data and bathymetric data were used. The studied area is located at the Eastern Brazilian coast between the 20° W and 30° W meridians and the 15° S and 25° S parallels. The volcanic events responsible for the uncommon width of the continental shelf at the Abrolhos Bank were also responsible for the formation of the Abrolhos islands and seamounts, including the major Vitoria-Trindade chain. According to the literature, these volcanic structures are expected to have a corresponding gravity anomaly (McKenzie, 1976; Zembruscki, 1979). The main objective of this study is to develop and test a joint image interpretation method to compare spatial data and analyze their relations. 0.3 Theory and Method 0.3.1 Data sources The bathymetric data were derived from the 2-minute grid of ETOPO2v2 obtained from NOAA's National Geophysical Data Center (http://www.ngdc.noaa.gov). The satellite marine gravity 1
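    The dot-product, cross-product and gradient-angle quantities described above are straightforward to compute for two co-registered grids. The following minimal Python/NumPy sketch uses small synthetic "bathymetry" and "gravity anomaly" grids sharing a seamount-like source; the grids and scaling are illustrative assumptions, not the study's data.

      import numpy as np

      def gradient_products(grid_a, grid_b):
          """Dot product, cross product (z-component) and angle between the
          horizontal gradients of two co-registered grids."""
          ay, ax = np.gradient(grid_a)
          by, bx = np.gradient(grid_b)
          dot = ax * bx + ay * by
          cross = ax * by - ay * bx
          angle = np.degrees(np.arctan2(np.abs(cross), dot))   # 0 = parallel, 180 = anti-parallel
          return dot, cross, angle

      # Synthetic co-registered grids with a common seamount-like source.
      y, x = np.mgrid[-50:50, -50:50].astype(float)
      bathy = -2000 + 800 * np.exp(-(x**2 + y**2) / 300.0)      # bathymetry (m)
      grav = 30 * np.exp(-(x**2 + y**2) / 400.0)                # gravity anomaly (mGal)

      dot, cross, angle = gradient_products(bathy, grav)
      print("median angle between gradients (deg):", round(float(np.median(angle)), 1))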

  8. RankProd 2.0: a refactored bioconductor package for detecting differentially expressed features in molecular profiling datasets.

    Science.gov (United States)

    Del Carratore, Francesco; Jankevics, Andris; Eisinga, Rob; Heskes, Tom; Hong, Fangxin; Breitling, Rainer

    2017-09-01

    The Rank Product (RP) is a statistical technique widely used to detect differentially expressed features in molecular profiling experiments such as transcriptomics, metabolomics and proteomics studies. An implementation of the RP and the closely related Rank Sum (RS) statistics has been available in the RankProd Bioconductor package for several years. However, several recent advances in the understanding of the statistical foundations of the method have made a complete refactoring of the existing package desirable. We implemented a completely refactored version of the RankProd package, which provides a more principled implementation of the statistics for unpaired datasets. Moreover, the permutation-based P-value estimation methods have been replaced by exact methods, providing faster and more accurate results. RankProd 2.0 is available at Bioconductor (https://www.bioconductor.org/packages/devel/bioc/html/RankProd.html) and as part of the mzMatch pipeline (http://www.mzmatch.sourceforge.net). rainer.breitling@manchester.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
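    The Rank Product statistic itself is simply the geometric mean of a feature's ranks across replicate comparisons. The short Python sketch below illustrates only that statistic on synthetic fold-change data; it does not reproduce the package's exact P-value computation or the Rank Sum variant.

      import numpy as np
      from scipy.stats import rankdata

      rng = np.random.default_rng(6)
      n_genes, n_reps = 1000, 4
      # Log fold changes for n_reps replicate comparisons; genes 0-9 are truly up-regulated.
      lfc = rng.normal(0, 1, (n_genes, n_reps))
      lfc[:10] += 3.0

      # Rank genes within each replicate (rank 1 = most up-regulated), then take the
      # geometric mean of ranks across replicates: the Rank Product statistic.
      ranks = np.apply_along_axis(lambda col: rankdata(-col), 0, lfc)
      rp = np.exp(np.log(ranks).mean(axis=1))

      top = np.argsort(rp)[:10]
      print("genes with the smallest rank products:", sorted(top.tolist()))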

  9. Evaluation of Short-Term Cepstral Based Features for Detection of Parkinson’s Disease Severity Levels through Speech signals

    Science.gov (United States)

    Oung, Qi Wei; Nisha Basah, Shafriza; Muthusamy, Hariharan; Vijean, Vikneswaran; Lee, Hoileong

    2018-03-01

    Parkinson’s disease (PD) is a progressive neurodegenerative disease known as a motor system syndrome, caused by the death of dopamine-generating cells in a region of the human midbrain. PD normally affects people over 60 years of age and currently affects a large part of the worldwide population. Lately, much research has focused on the connection between PD and speech disorders. Studies have revealed that speech signals may be a suitable biomarker for distinguishing people with Parkinson’s (PWP) from healthy subjects, so early diagnosis of PD through speech signals can be considered for this aim. In this research, speech data are acquired based on speech behaviour as the biomarker for differentiating PD severity levels (mild and moderate) from healthy subjects. The feature extraction algorithms applied are Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and Weighted Linear Prediction Cepstral Coefficients (WLPCC). For classification, two types of classifiers are used: k-Nearest Neighbour (KNN) and Probabilistic Neural Network (PNN). The experimental results demonstrate that the PNN and KNN classifiers achieve the best average classification performance of 92.63% and 88.56%, respectively, through 10-fold cross-validation. The suggested techniques thus have the potential to become promising tools for PD detection.
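    One of the feature/classifier pairings above (MFCC features with a KNN classifier) can be sketched compactly. The Python sketch below is an illustration under stated assumptions only: it assumes librosa is available, uses synthetic tone-like signals in place of real sustained-vowel recordings, and summarises each utterance by the per-coefficient mean and standard deviation of its MFCC frames; it is not the authors' dataset, feature set, or cross-validation protocol.

      import numpy as np
      import librosa
      from sklearn.neighbors import KNeighborsClassifier

      sr = 16000
      rng = np.random.default_rng(7)

      def mfcc_vector(y):
          """Fixed-length utterance descriptor: per-coefficient mean and std of MFCC frames."""
          m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
          return np.concatenate([m.mean(axis=1), m.std(axis=1)])

      # Synthetic stand-ins for sustained-vowel recordings: "healthy" = steady tone,
      # "PD-like" = tone with tremor-like amplitude/frequency modulation plus noise.
      t = np.arange(int(1.0 * sr)) / sr
      healthy = [np.sin(2 * np.pi * 150 * t) + 0.01 * rng.normal(size=t.size)
                 for _ in range(20)]
      pd_like = [(1 + 0.4 * np.sin(2 * np.pi * 5 * t))
                 * np.sin(2 * np.pi * 150 * t + 2 * np.sin(2 * np.pi * 5 * t))
                 + 0.1 * rng.normal(size=t.size) for _ in range(20)]

      X = np.array([mfcc_vector(y.astype(np.float32)) for y in healthy + pd_like])
      labels = np.array([0] * 20 + [1] * 20)

      knn = KNeighborsClassifier(n_neighbors=3).fit(X[::2], labels[::2])
      print("held-out accuracy:", knn.score(X[1::2], labels[1::2]))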

  10. A Multi-Functional Microelectrode Array Featuring 59760 Electrodes, 2048 Electrophysiology Channels, Stimulation, Impedance Measurement and Neurotransmitter Detection Channels.

    Science.gov (United States)

    Dragas, Jelena; Viswam, Vijay; Shadmani, Amir; Chen, Yihui; Bounik, Raziyeh; Stettler, Alexander; Radivojevic, Milos; Geissler, Sydney; Obien, Marie; Müller, Jan; Hierlemann, Andreas

    2017-06-01

    Biological cells are characterized by highly complex phenomena and processes that are, to a great extent, interdependent. To gain detailed insights, devices designed to study cellular phenomena need to enable tracking and manipulation of multiple cell parameters in parallel; they have to provide high signal quality and high spatiotemporal resolution. To this end, we have developed a CMOS-based microelectrode array system that integrates six measurement and stimulation functions, the largest number to date. Moreover, the system features the largest active electrode array area to date (4.48×2.43 mm²) to accommodate 59,760 electrodes, while its power consumption, noise characteristics, and spatial resolution (13.5 μm electrode pitch) are comparable to the best state-of-the-art devices. The system includes: 2,048 action-potential (AP, bandwidth: 300 Hz to 10 kHz) recording units, 32 local-field-potential (LFP, bandwidth: 1 Hz to 300 Hz) recording units, 32 current recording units, 32 impedance measurement units, and 28 neurotransmitter detection units, in addition to the 16 dual-mode voltage-only or current/voltage-controlled stimulation units. The electrode array architecture is based on a switch matrix, which allows for connecting any measurement/stimulation unit to any electrode in the array and for performing different measurement/stimulation functions in parallel.

  11. Detection and validation of single feature polymorphisms in cowpea (Vigna unguiculata L. Walp) using a soybean genome array

    Directory of Open Access Journals (Sweden)

    Wanamaker Steve

    2008-02-01

    Full Text Available Abstract Background Cowpea (Vigna unguiculata L. Walp) is an important food and fodder legume of the semiarid tropics and subtropics worldwide, especially in sub-Saharan Africa. High density genetic linkage maps are needed for marker assisted breeding but are not available for cowpea. A single feature polymorphism (SFP) is a microarray-based marker which can be used for high throughput genotyping and high density mapping. Results Here we report detection and validation of SFPs in cowpea using a readily available soybean (Glycine max) genome array. Robustified projection pursuit (RPP) was used for statistical analysis using RNA as a surrogate for DNA. Using a 15% outlying score cut-off, 1058 potential SFPs were enumerated between two parents of a recombinant inbred line (RIL) population segregating for several important traits including drought tolerance, Fusarium and brown blotch resistance, grain size and photoperiod sensitivity. Sequencing of 25 putative polymorphism-containing amplicons yielded a SFP probe set validation rate of 68%. Conclusion We conclude that the Affymetrix soybean genome array is a satisfactory platform for identification of some thousands of SFPs for cowpea. This study provides an example of extension of genomic resources from a well supported species to an orphan crop. Presumably, other legume systems are similarly tractable to SFP marker development using existing legume array resources.

  12. Recent Advances in Bathymetric Surveying of Continental Shelf Regions Using Autonomous Vehicles

    Science.gov (United States)

    Holland, K. T.; Calantoni, J.; Slocum, D.

    2016-02-01

    Obtaining bathymetric observations within the continental shelf in areas closer to the shore is often time consuming and dangerous, especially when uncharted shoals and rocks present safety concerns to survey ships and launches. However, surveys in these regions are critically important to numerical simulation of oceanographic processes, as bathymetry serves as the bottom boundary condition in operational forecasting models. We will present recent progress in bathymetric surveying using both traditional vessels retrofitted for autonomous operations and relatively inexpensive, small team deployable, Autonomous Underwater Vehicles (AUV). Both systems include either high-resolution multibeam echo sounders or interferometric sidescan sonar sensors with integrated inertial navigation system capabilities consistent with present commercial-grade survey operations. The advantages and limitations of these two configurations employing both unmanned and autonomous strategies are compared using results from several recent survey operations. We will demonstrate how sensor data collected from unmanned platforms can augment or even replace traditional data collection technologies. Oceanographic observations (e.g., sound speed, temperature and currents) collected simultaneously with bathymetry using autonomous technologies provide additional opportunities for advanced data assimilation in numerical forecasts. Discussion focuses on our vision for unmanned and autonomous systems working in conjunction with manned or in-situ systems to optimally and simultaneously collect data in environmentally hostile or difficult to reach areas.

  13. Bathymetric map and area/capacity table for Castle Lake, Washington

    Science.gov (United States)

    Mosbrucker, Adam R.; Spicer, Kurt R.

    2017-11-14

    The May 18, 1980, eruption of Mount St. Helens produced a 2.5-cubic-kilometer debris avalanche that dammed South Fork Castle Creek, causing Castle Lake to form behind a 20-meter-tall blockage. Risk of a catastrophic breach of the newly impounded lake led to outlet channel stabilization work, aggressive monitoring programs, mapping efforts, and blockage stability studies. Despite relatively large uncertainty, early mapping efforts adequately supported several lake breakout models, but have limited applicability to current lake monitoring and hazard assessment. Here, we present the results of a bathymetric survey conducted in August 2012 with the purpose of (1) verifying previous volume estimates, (2) computing an area/capacity table, and (3) producing a bathymetric map. Our survey found seasonal lake volume ranges between 21.0 and 22.6 million cubic meters with a fundamental vertical accuracy representing 0.88 million cubic meters. Lake surface area ranges between 1.13 and 1.16 square kilometers. Relationships developed by our results allow the computation of lake volume from near real-time lake elevation measurements or from remotely sensed imagery.
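
    The area/capacity computation mentioned above can be sketched as follows for a gridded bathymetric surface; the grid, cell size, and stage values are invented for illustration and are not the survey data.

```python
import numpy as np

def area_capacity_table(bed_elevation, cell_area, stages):
    """Surface area and storage volume for a set of lake stages (water-surface elevations)."""
    rows = []
    for stage in stages:
        depth = np.clip(stage - bed_elevation, 0.0, None)    # water depth per cell
        area = np.count_nonzero(depth > 0) * cell_area        # inundated area
        volume = depth.sum() * cell_area                       # storage below this stage
        rows.append((stage, area, volume))
    return rows

# Toy 100 m x 100 m grid with 10 m cells and a bowl-shaped bed (hypothetical numbers)
x, y = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
bed = 700 + 30 * (x**2 + y**2)            # metres above an arbitrary datum
for stage, area, vol in area_capacity_table(bed, cell_area=100.0, stages=[705, 710, 715]):
    print(f"stage {stage} m: area {area:.0f} m^2, volume {vol:.0f} m^3")
```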

  14. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    Science.gov (United States)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional
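
    The multiple-kernel idea can be illustrated with a simplified, fixed-weight combination of one kernel per feature modality fed to a precomputed-kernel SVM; a full MKL framework would learn the weights, and all arrays and parameters below are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_cnn = rng.normal(size=(60, 128))    # placeholder CNN features per image patch
X_3d = rng.normal(size=(60, 20))      # placeholder 3D point-cloud features
y = rng.integers(0, 2, size=60)       # 1 = damaged, 0 = intact (synthetic labels)

# Weighted sum of per-modality kernels (a fixed-weight stand-in for learned MKL weights)
w = 0.6
K = w * rbf_kernel(X_cnn, gamma=1.0 / 128) + (1 - w) * rbf_kernel(X_3d, gamma=1.0 / 20)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))   # training accuracy on the synthetic data
```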

  15. Mammographic features of screening detected pT1 (a–b) invasive breast cancer using BI-RADS lexicon

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Xavier, E-mail: xbarga@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Santamaría, Gorane, E-mail: gsanta@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Velasco, Martín, E-mail: mvelasco@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Amo, Montse del, E-mail: mdelamo@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Arguis, Pedro, E-mail: parguis@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Burrel, Marta, E-mail: mburrel@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain); Capurro, Sebastian, E-mail: scapurro@clinic.ub.es [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/Villarroel, 170, 08036 Barcelona (Spain)

    2012-10-15

    Aim: To describe mammographic features in screening detected invasive breast cancer less than or equal to 10 mm using the Breast Imaging Reporting and Data System lexicon in full-field digital mammography. Patients and methods: A retrospective analysis of 123 pT1 (a–b) invasive breast cancers in women aged 50–69 years from our screening program. Radiologic patterns were: masses, calcifications, distortions, asymmetries and mixed. For masses, shape, margins and density were taken into account; for calcifications, morphology, number of flecks and size of the cluster, following Breast Imaging Reporting and Data System terminology. Results: We found 61 masses (49.6%), 8 masses with calcifications (6.5%), 30 groups of calcifications (24.4%), 19 architectural distortions (15.4%), 1 architectural distortion with calcifications (0.8%), and 4 asymmetries (3.2%). Sixty out of 69 masses were irregular in shape, 6 lobular, 2 oval and 1 round. Thirty-four showed ill-defined margins, 29 spiculated and 6 microlobulated. Most of them showed a density similar to the surrounding fibroglandular tissue. Calcifications were pleomorphic or fine linear in 24 of 30 (80%). Most cases showed more than 10 flecks and a cluster size greater than 1 cm. Conclusion: The predominant radiologic finding is an irregular, isodense mass whose margins tend to share different descriptors, with ill-defined margins being the most constant finding. Calcifications representing invasive cancer are predominantly pleomorphic, with more than 10 flecks per cm. Architectural distortion and invasive tubular carcinoma are more common than reported in general series.

  16. Mammographic features of screening detected pT1 (a–b) invasive breast cancer using BI-RADS lexicon

    International Nuclear Information System (INIS)

    Bargalló, Xavier; Santamaría, Gorane; Velasco, Martín; Amo, Montse del; Arguis, Pedro; Burrel, Marta; Capurro, Sebastian

    2012-01-01

    Aim: To describe mammographic features in screening detected invasive breast cancer less than or equal to 10 mm using the Breast Imaging Reporting and Data System lexicon in full-field digital mammography. Patients and methods: A retrospective analysis of 123 pT1 (a–b) invasive breast cancers in women aged 50–69 years from our screening program. Radiologic patterns were: masses, calcifications, distortions, asymmetries and mixed. For masses, shape, margins and density were taken into account; for calcifications, morphology, number of flecks and size of the cluster, following Breast Imaging Reporting and Data System terminology. Results: We found 61 masses (49.6%), 8 masses with calcifications (6.5%), 30 groups of calcifications (24.4%), 19 architectural distortions (15.4%), 1 architectural distortion with calcifications (0.8%), and 4 asymmetries (3.2%). Sixty out of 69 masses were irregular in shape, 6 lobular, 2 oval and 1 round. Thirty-four showed ill-defined margins, 29 spiculated and 6 microlobulated. Most of them showed a density similar to the surrounding fibroglandular tissue. Calcifications were pleomorphic or fine linear in 24 of 30 (80%). Most cases showed more than 10 flecks and a cluster size greater than 1 cm. Conclusion: The predominant radiologic finding is an irregular, isodense mass whose margins tend to share different descriptors, with ill-defined margins being the most constant finding. Calcifications representing invasive cancer are predominantly pleomorphic, with more than 10 flecks per cm. Architectural distortion and invasive tubular carcinoma are more common than reported in general series.

  17. Remote measurement of river discharge using thermal particle image velocimetry (PIV) and various sources of bathymetric information

    Science.gov (United States)

    Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.

    2017-01-01

    Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
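
    A minimal sketch of the discharge computation implied above, converting PIV surface velocities to depth-averaged velocities with a velocity index and integrating across one cross section; the station spacing, index value, and measurements are invented, and in practice the index is site-specific and calibrated.

```python
import numpy as np

def discharge(surface_velocity, depth, station_spacing, velocity_index=0.85):
    """Discharge from PIV surface velocities and depths along one cross section.
    Depth-averaged velocity is approximated as velocity_index * surface velocity."""
    v_mean = velocity_index * np.asarray(surface_velocity)
    q_unit = v_mean * np.asarray(depth)               # unit discharge per station (m^2/s)
    return np.trapz(q_unit, dx=station_spacing)       # integrate across the section (m^3/s)

# Hypothetical cross section sampled every 2 m
v_surf = [0.0, 0.4, 0.7, 0.9, 0.8, 0.5, 0.0]    # m/s from thermal PIV
d = [0.0, 0.3, 0.6, 0.8, 0.7, 0.4, 0.0]         # m from spectrally based depth retrieval
print(f"Q = {discharge(v_surf, d, station_spacing=2.0):.2f} m^3/s")
```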

  18. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching.

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong

    2015-12-21

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians' head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.

  19. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching

    Directory of Open Access Journals (Sweden)

    Guohua Wang

    2015-12-01

    Full Text Available Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.

  20. iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications

    Science.gov (United States)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints, fusing local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with those of leading alternative 3D features.
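
    One generic way to build such a frame is to orthonormalize the image-gradient direction against the depth normal, as sketched below under that assumption; this is a simple Gram-Schmidt construction for illustration, not necessarily the authors' exact formulation.

```python
import numpy as np

def local_frame(normal, gradient_dir):
    """Orthonormal local frame from a surface normal and an image-gradient
    direction (both 3-vectors in camera coordinates)."""
    n = normal / np.linalg.norm(normal)
    # Project the gradient into the tangent plane of the surface (Gram-Schmidt)
    g = gradient_dir - np.dot(gradient_dir, n) * n
    g /= np.linalg.norm(g)
    b = np.cross(n, g)                    # completes a right-handed basis
    return np.stack([g, b, n])            # rows: x-, y-, z-axes of the frame

frame = local_frame(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.5, 0.0]))
print(frame @ frame.T)                    # ~ identity: the frame is orthonormal
```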

  1. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
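
    The fitness evaluation described above can be sketched as follows; the GP machinery and the evolved extractors themselves are not shown, and the toy extractor, synthetic chips, and fold count are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(candidate_extractor, chips, labels, folds=5):
    """Fitness of a candidate GP individual = cross-validated SVM accuracy
    on the feature vectors it produces for each image chip."""
    X = np.vstack([candidate_extractor(chip) for chip in chips])
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=folds).mean()

def toy_extractor(chip, grid=4):
    """Placeholder 'evolved' extractor: per-cell means over a grid x grid tiling."""
    h, w = chip.shape
    cells = chip[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()

rng = np.random.default_rng(1)
chips = [rng.random((32, 32)) for _ in range(40)]    # synthetic FLIR-like chips
labels = rng.integers(0, 2, size=40)                  # buried object present / absent
print(fitness(toy_extractor, chips, labels))
```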

  2. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2017-12-01

    Full Text Available A quantum hybrid (QH intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO method with the intuitionistic rationality of traditional fuzzy k-nearest neighbours (Fuzzy k-NN algorithm (known simply as the Q-Fuzzy approach is proposed for efficient feature selection and classification of cells in cervical smeared (CS images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset features (i.e., global best particles that represent a pruned down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features (i.e., classification without prior feature selection and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (P-Fuzzy approach. In the first and second scenarios, we further divided the assessment criteria in terms of classification accuracy based on the choice of best features and those in terms of the different categories of the cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared the classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regards to the feature selection in the experimental scenarios 1 and 3. The synergy between the QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy as manifest in the reduction in number cell features, which is crucial for effective cervical cancer detection and diagnosis.

  3. Giordano Bruno crater on the Moon: Detection and Mapping of Hydration Features of Endogenic and/or Exogenic Nature

    Science.gov (United States)

    Saran Bhiravarasu, Sriram; Bhattacharya, Satadru; Chauhan, Prakash

    2017-10-01

    We analyze high resolution spectral and spatial data from the recent lunar missions and report the presence of strong hydration features within the inner flank, hummocky floor, ejecta and impact melt deposits of crater Giordano Bruno. Hydroxyl-bearing lithologies at Giordano Bruno are characterized primarily by a prominent absorption feature near 2800 nm, the band minimum of which extends beyond 3000 nm. The hydration features are found to be associated with low-Ca pyroxene-bearing noritic lithologies along the inner crater flanks, whereas similar features are also seen within the hummocky crater floor in association with shocked plagioclase-bearing anorthositic lithology. Interestingly, the ejecta blanket is characterized by sharp, narrow features centered near 2800 nm similar to the features previously reported from the Compton-Belkovich volcanic complex and the central peak of crater Theophilus. The low-Ca pyroxene-bearing rock exposures within the crater inner flanks are characterized by both presence and absence of the hydration features. Enhanced hydration is also seen within the ejecta blanket covering the nearby Harkhebi K and J craters. We also analyze the impact melts and ejecta using radar images at regions interior and exterior to the Giordano Bruno crater rim. The anomalous behavior of the hydration feature associated with low-Ca pyroxene-rich exposures, and its nature and occurrences within the impact melt sheets inside the crater and in the ejecta blankets, could indicate an endogenic and/or exogenic nature of the observed hydration feature. Initial results indicate the presence of the strongest hydration feature in the partially shadowed pole-facing slopes (with low-Ca pyroxene-bearing exposures) and its complete absence in the equator-facing sun-lit slopes. This hints at a possible exogenic origin, whereas the same feature occurring (with the same mineral) under both sun-lit and shadowed conditions suggests a magmatic origin. We propose that the heterogeneous

  4. Detection of the 3.4- and 2.8-micron emission features in Comet Bradfield (1987s)

    International Nuclear Information System (INIS)

    Brooke, T.Y.; Tokunaga, A.T.; Knacke, R.F.; Owen, T.C.; Mumma, M.J.

    1990-01-01

    Comet Bradfield's C-H emission feature at 3.4 microns and the emission feature near 2.8 microns exhibit spectral shapes similar to those noted in Comets Halley and Wilson; the derived abundances of the C-H bonds in all three comets are also comparable (within water production rate uncertainties). These data support the hypothesis that the species responsible for the 3.4- and 2.8-micron features may be common to all comets. Beyond this, the widely differing ages of the three comets suggest that the 3.4-micron feature-emitting organics are not the product of surface irradiation processes after the comets' formation. 25 refs

  5. Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007

    Science.gov (United States)

    Bennion, David

    2009-01-01

    To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with various survey methods and interpolation processes limits the statistical certainty of the results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.

  6. Structural interpretation of the Konkan basin, southwestern continental margin of India, based on magnetic and bathymetric data

    Digital Repository Service at National Institute of Oceanography (India)

    Subrahmanyam, V.; Krishna, K.S.; Murty, G.P.S.; Rao, D.G.; Ramana, M.V.; Rao, M.G.

    Magnetic and bathymetric studies on the Konkan basin of the southwestern continental margin of India reveal prominent NNW-SSE, NW-SE, ENE-WSW, and WNW-ESE structural trends. The crystalline basement occurs at about 5-6 km below the mean sea level. A...

  7. Bathymetric highs in the mid-slope region of the western continental margin of India - Structure and mode of origin

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, D.G.; Paropkari, A.L.; Krishna, K.S.; Chaubey, A.K.; Ajay, K.K.; Kodagali, V.N.

    Analysis of the multi- and single beam bathymetric, seismic, magnetic and free-air gravity (ship-borne and satellite derived) data from the western continental margin of India between 12 degrees 40 minutes N and 15 degrees N had revealed...

  8. Bathymetric survey of the Cayuga Inlet flood-control channel and selected tributaries in Ithaca, New York, 2016

    Science.gov (United States)

    Wernly, John F.; Nystrom, Elizabeth A.; Coon, William F.

    2017-09-08

    From July 14 to July 20, 2016, the U.S. Geological Survey, in cooperation with the City of Ithaca, New York, and the New York State Department of State, surveyed the bathymetry of the Cayuga Inlet flood-control channel and the mouths of selected tributaries to Cayuga Inlet and Cayuga Lake in Ithaca, N.Y. The flood-control channel, built by the U.S. Army Corps of Engineers between 1965 and 1970, was designed to convey flood flows from the Cayuga Inlet watershed through the City of Ithaca and minimize possible flood damages. Since that time, the channel has infrequently been maintained by dredging, and sediment accumulation and resultant shoaling have greatly decreased the conveyance of the channel and its navigational capability. U.S. Geological Survey personnel collected bathymetric data by using an acoustic Doppler current profiler. The survey produced a dense dataset of water depths that were converted to bottom elevations. These elevations were then used to generate a geographic information system bathymetric surface. The bathymetric data and resultant bathymetric surface show the current condition of the channel and provide the information that governmental agencies charged with maintaining the Cayuga Inlet for flood-control and navigational purposes need to make informed decisions regarding future maintenance measures.

  9. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has a wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding the separation step of land and sea. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. Then these features are employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After getting the suspicious areas, there are still some false alarms such as microwaves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and nonships in suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination and ship size.
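
    The LBP-and-SVM part of such a chip classifier can be sketched with skimage and scikit-learn as below; the saliency/visual-attention stage (CVLBP) is not reproduced, and the chips, labels, and LBP parameters are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(chip, P=8, R=1.0):
    """Uniform LBP histogram for one greyscale image chip."""
    codes = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
chips = rng.random((50, 64, 64))          # placeholder chips cut from a large scene
labels = rng.integers(0, 2, size=50)      # 1 = ship present, 0 = background

X = np.vstack([lbp_histogram(c) for c in chips])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))               # training accuracy on the synthetic chips
```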

  10. Qualitative simulation of bathymetric changes due to reservoir sedimentation: A Japanese case study.

    Directory of Open Access Journals (Sweden)

    Ahmed Bilal

    Full Text Available Sediment-dynamics modeling is a useful tool for estimating a dam's lifespan and its cost-benefit analysis. Collecting real data for sediment-dynamics analysis from conventional field survey methods is both tedious and expensive. Therefore, for most rivers, the historical record of data is either missing or not very detailed. Available data and existing tools have much potential and may be used for qualitative prediction of future bathymetric change trend. This study shows that proxy approaches may be used to increase the spatiotemporal resolution of flow data, and hypothesize the river cross-sections and sediment data. Sediment-dynamics analysis of the reach of the Tenryu River upstream of Sakuma Dam in Japan was performed to predict its future bathymetric changes using a 1D numerical model (HEC-RAS). In this case study, only annually-averaged flow data and the river's longitudinal bed profile at 5-year intervals were available. Therefore, the other required data, including river cross-section and geometry and sediment inflow grain sizes, had to be hypothesized or assimilated indirectly. The model yielded a good qualitative agreement, with an R2 (coefficient of determination) of 0.8 for the observed and simulated bed profiles. A predictive simulation demonstrated that the useful life of the dam would end after the year 2035 (±5 years), which is in conformity with initial detailed estimates. The study indicates that a sediment-dynamic analysis can be performed even with a limited amount of data. However, such studies may only assess the qualitative trends of sediment dynamics.

  11. Qualitative simulation of bathymetric changes due to reservoir sedimentation: A Japanese case study.

    Science.gov (United States)

    Bilal, Ahmed; Dai, Wenhong; Larson, Magnus; Beebo, Qaid Naamo; Xie, Qiancheng

    2017-01-01

    Sediment-dynamics modeling is a useful tool for estimating a dam's lifespan and its cost-benefit analysis. Collecting real data for sediment-dynamics analysis from conventional field survey methods is both tedious and expensive. Therefore, for most rivers, the historical record of data is either missing or not very detailed. Available data and existing tools have much potential and may be used for qualitative prediction of future bathymetric change trend. This study shows that proxy approaches may be used to increase the spatiotemporal resolution of flow data, and hypothesize the river cross-sections and sediment data. Sediment-dynamics analysis of the reach of the Tenryu River upstream of Sakuma Dam in Japan was performed to predict its future bathymetric changes using a 1D numerical model (HEC-RAS). In this case study, only annually-averaged flow data and the river's longitudinal bed profile at 5-year intervals were available. Therefore, the other required data, including river cross-section and geometry and sediment inflow grain sizes, had to be hypothesized or assimilated indirectly. The model yielded a good qualitative agreement, with an R2 (coefficient of determination) of 0.8 for the observed and simulated bed profiles. A predictive simulation demonstrated that the useful life of the dam would end after the year 2035 (±5 years), which is in conformity with initial detailed estimates. The study indicates that a sediment-dynamic analysis can be performed even with a limited amount of data. However, such studies may only assess the qualitative trends of sediment dynamics.

  12. The utility of bathymetric echo sounding data in modelling benthic impacts using NewDEPOMOD driven by an FVCOM model.

    Science.gov (United States)

    Rochford, Meghan; Black, Kenneth; Aleynik, Dmitry; Carpenter, Trevor

    2017-04-01

    The Scottish Environmental Protection Agency (SEPA) are currently implementing new regulations for consenting developments at new and pre-existing fish farms. Currently, a 15-day current record from multiple depths at one location near the site is required to run DEPOMOD, a depositional model used to determine the depositional footprint of waste material from fish farms, developed by Cromey et al. (2002). The present project involves modifying DEPOMOD to accept data from 3D hydrodynamic models to allow for a more accurate representation of the currents around the farms. Bathymetric data are key boundary conditions for accurate modelling of current velocity data. The aim of the project is to create a script that will use the outputs from FVCOM, a 3D hydrodynamic model developed by Chen et al. (2003), and input them into NewDEPOMOD (a new version of DEPOMOD with more accurately parameterised sediment transport processes) to determine the effect of a fish farm on the surrounding environment. This study compares current velocity data under two scenarios; the first, using interpolated bathymetric data, and the second using bathymetric data collected during a bathymetric echo sounding survey of the site. Theoretically, if the hydrodynamic model is of high enough resolution, the two scenarios should yield relatively similar results. However, the expected result is that the survey data will be of much higher resolution and therefore of better quality, producing more realistic velocity results. The improvement of bathymetric data will also improve sediment transport predictions in NewDEPOMOD. This work will determine the sensitivity of model predictions to bathymetric data accuracy at a range of sites with varying bathymetric complexity and thus give information on the potential costs and benefits of echo sounding survey data inputs. Chen, C., Liu, H. and Beardsley, R.C., 2003. An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model

  13. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    Science.gov (United States)

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations, due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged
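
    A zero-dimensional model of this kind can be sketched with Krone-type deposition and Partheniades-type erosion under a semidiurnal tide, as below; all parameter values are invented for illustration and are not those of the study.

```python
import numpy as np

# Invented parameters (illustrative only)
h = 5.0                          # water depth, m
u_max = 0.6                      # maximum tidal current, m/s
rho, cd = 1025.0, 0.0025         # water density (kg/m3), drag coefficient
tau_ce, tau_cd = 0.15, 0.08      # critical shear stresses for erosion / deposition, Pa
M, ws = 1e-4, 5e-4               # erosion rate constant (kg/m2/s), settling velocity (m/s)
C, bed = 0.05, 0.0               # suspended concentration (kg/m3), cumulative bed change (kg/m2)

dt, T = 60.0, 12.42 * 3600.0     # time step (s), semidiurnal period (s)
for t in np.arange(0.0, 365 * 24 * 3600.0, dt):        # integrate over one year
    u = u_max * np.sin(2 * np.pi * t / T)
    tau = rho * cd * u * u                               # bed shear stress, Pa
    erosion = M * max(tau / tau_ce - 1.0, 0.0)           # Partheniades-type erosion flux
    deposition = ws * C * max(1.0 - tau / tau_cd, 0.0)   # Krone-type deposition flux
    C += dt * (erosion - deposition) / h
    bed += dt * (deposition - erosion)

print(f"net bed change after one year: {bed / 500.0:.3f} m")  # assumes 500 kg/m3 dry bulk density
```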

  14. Predicting the aquatic stage sustainability of a restored backwater channel combining in-situ and airborne remotely sensed bathymetric models.

    Science.gov (United States)

    Jérôme, Lejot; Jérémie, Riquier; Hervé, Piégay

    2014-05-01

    As with other large river floodplains worldwide, the floodplain of the Rhône has been deeply altered by human activities and infrastructures over the last centuries, both in terms of structure and functioning. An ambitious restoration plan of selected by-passed reaches has been implemented since 1999, in order to improve their ecological conditions. One of the main actions aimed to increase the aquatic areas in floodplain channels (i.e. secondary channels, backwaters, …). In practice, fine and/or coarse alluvium were dredged, either locally or over the entire cut-off channel length. Sometimes the upstream or downstream alluvial plugs were also removed to reconnect the restored feature to the main channel. Such operations aim to restore forms and associated habitats of biotic communities that are no longer created or maintained by the river itself. In this context, assessing the sustainability of such restoration actions is a major issue. In this study, we focus on one of the 24 floodplain channels which have been restored along the Rhône River since 1999, the Malourdie channel (Chautagne reach, France). The geomorphologic evolution of the channel has been monitored for a decade to assess the aquatic stage sustainability of this former fully isolated channel, which was restored as a backwater in 2004. Two main types of measures were performed: (a) water depth and fine sediment thickness were surveyed with an auger every 10 m along the channel centerline, on average every year and a half, allowing us to establish an exponential decay model of terrestrialization rates through time; (b) three airborne campaigns (2006, 2007, 2012) by Unmanned Aerial Vehicle (UAV) provided images from which bathymetry was inferred in combination with observed field measures. Coupling field and airborne models allows us to simulate different states of terrestrialization at the scale of the whole restored feature (e.g. 2020/2030/2050). Raw results indicate that terrestrialization
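
    The exponential decay model mentioned above can be fitted as sketched below with SciPy; the survey years and depths are illustrative placeholders, not the monitoring data, and only the 2004 restoration date is taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, d0, k):
    """Mean water depth decaying exponentially through time (terrestrialization)."""
    return d0 * np.exp(-k * t)

# Hypothetical survey record: years since restoration vs. mean water depth (m)
t_obs = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 8.0, 10.0])
d_obs = np.array([1.20, 1.05, 0.90, 0.80, 0.72, 0.60, 0.52])

(d0, k), _ = curve_fit(decay, t_obs, d_obs, p0=(1.2, 0.1))
for year in (2020, 2030, 2050):                       # project future states
    print(year, round(decay(year - 2004, d0, k), 2), "m")   # channel restored in 2004
```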

  15. Evaluation of LiDAR-acquired bathymetric and topographic data accuracy in various hydrogeomorphic settings in the Deadwood and South Fork Boise Rivers, West-Central Idaho, 2007

    Science.gov (United States)

    Skinner, Kenneth D.

    2011-01-01

    High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging)-or EAARL-system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The

  16. Spatial features register: toward standardization of spatial features

    Science.gov (United States)

    Cascio, Janette

    1994-01-01

    As the need to share spatial data increases, more than agreement on a common format is needed to ensure that the data is meaningful to both the importer and the exporter. Effective data transfer also requires common definitions of spatial features. To achieve this, part 2 of the Spatial Data Transfer Standard (SDTS) provides a model for a spatial features data content specification and a glossary of features and attributes that fit this model. The model provides a foundation for standardizing spatial features. The glossary now contains only a limited subset of hydrographic and topographic features. For it to be useful, terms and definitions must be included for other categories, such as base cartographic, bathymetric, cadastral, cultural and demographic, geodetic, geologic, ground transportation, international boundaries, soils, vegetation, water, and wetlands, and the set of hydrographic and topographic features must be expanded. This paper will review the philosophy of the SDTS part 2 and the current plans for creating a national spatial features register as one mechanism for maintaining part 2.

  17. Secular bathymetric variations of the North Channel in the Changjiang (Yangtze) Estuary, China, 1880-2013: Causes and effects

    Science.gov (United States)

    Mei, Xuefei; Dai, Zhijun; Wei, Wen; Li, Weihua; Wang, Jie; Sheng, Hao

    2018-02-01

    As the interface between the fluvial upland system and the open coast, global estuaries, and especially the Changjiang Estuary, are facing serious challenges owing to various anthropogenic activities. Since the establishment of the Three Gorges Dam (TGD), currently the world's largest hydraulic structure, and certain other local hydraulic engineering structures, the Changjiang Estuary has experienced severe bathymetric variations. It is urgent to analyze the estuarine morphological response to the basin-wide disturbance to enable a better management of estuarine environments. North Channel (NC), the largest anabranched estuary in the Changjiang Estuary, is the focus of this study. Based on the analysis of bathymetric data between 1880 and 2013 and related hydrological data, we developed the first study on the centennial bathymetric variations of the NC. It is found that the bathymetric changes of the NC include two main modes, with the first mode, representing 64% of the NC variability, indicating observable deposition in the mouth bar and its outer side area (lower reach), and the second mode, representing 11% of the NC variability, further demonstrating channel deepening along the inner side of the mouth bar (upper reach) during 1970-2013. Further, recent erosion observed along the inner side of the mouth bar is caused by the decrease in riverine sediment, especially in relation to TGD-induced sediment trapping since 2003, while the deposition along the lower reach since 2003 can be explained by landward sediment transport because of strengthened flood-tide forcing under the joint action of the TGD-induced decrease in seasonal flood discharge and the narrowing of the lower reach induced by land reclamation. Generally, the upper and lower NC reaches are dominated by fluvial and tidal discharge, respectively; however, episodic extreme floods can completely alter the channel morphology by smoothing the entire channel. The results presented herein for the NC enrich our understanding of bathymetric
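
    The study is described above only as identifying two main modes of variability; one standard way to extract such modes is an EOF (SVD) decomposition of a survey-by-location anomaly matrix, sketched below on synthetic data under that assumption.

```python
import numpy as np

def eof_modes(bathy_stack):
    """EOF decomposition of a (n_surveys, n_points) stack of bathymetric grids
    flattened to vectors. Returns spatial modes, temporal amplitudes and the
    fraction of variance explained by each mode."""
    anomalies = bathy_stack - bathy_stack.mean(axis=0)
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s**2 / np.sum(s**2)
    return Vt, U * s, variance_fraction       # spatial patterns, amplitudes, variance

# Synthetic example: 10 surveys of 500 depth points with one dominant pattern
rng = np.random.default_rng(2)
pattern = np.sin(np.linspace(0, np.pi, 500))            # a deposition/erosion pattern
stack = np.outer(np.linspace(-1, 1, 10), pattern) + 0.1 * rng.normal(size=(10, 500))
modes, amplitudes, var = eof_modes(stack)
print(var[:2])   # share of variability captured by the first two modes
```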

  18. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images

    Directory of Open Access Journals (Sweden)

    Sivaramakrishnan Rajaraman

    2018-04-01

    Full Text Available Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and expertise in classifying and counting parasitized and uninfected cells. Such an examination could be arduous for large-scale diagnoses, resulting in poor quality. State-of-the-art image-analysis based computer-aided diagnosis (CADx) methods using machine learning (ML) techniques, applied to microscopic images of the smears using hand-engineered features, demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNN), a class of deep learning (DL) models, promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could, therefore, serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN-based DL models as feature extractors toward classifying parasitized and uninfected cells to aid in improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose.
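
    A hedged sketch of using a pre-trained CNN as a fixed feature extractor, assuming Keras/TensorFlow and a logistic-regression classifier; the backbone (VGG16), pooling choice, image size, and synthetic patches are assumptions, since the study compared several pre-trained models and layer choices.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Pre-trained backbone without its classification head; pooled activations act as features
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: (n, 224, 224, 3) array of cell patches with values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# Placeholder data: random patches standing in for parasitized / uninfected cells
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 224, 224, 3))
labels = rng.integers(0, 2, size=20)

X = extract_features(images)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))
```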

  19. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    Energy Technology Data Exchange (ETDEWEB)

    Raj, Sunny [Univ. of Central Florida, Orlando, FL (United States); Jha, Sumit Kumar [Univ. of Central Florida, Orlando, FL (United States); Pullum, Laura L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ramanathan, Arvind [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  20. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance, epileptic seizure detection and prediction. This is because the diversity and the evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity in a time series could characterize the state changes. A literature review indicates that the Delay Vector Variance (DVV) could examine the nonlinearity to gain insight into the EEG signals, but very limited work has been done to address the quantitative DVV approach. Hence, the outcomes of the quantitative DVV should be evaluated to detect epileptic seizures. The objective of this work was to develop a new epileptic seizure detection method based on quantitative DVV. This new epileptic seizure detection method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. Then a multi-kernel strategy was proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity is more sensitive than the energy and entropy. 87.5% overall accuracy of recognition and 75.0% overall accuracy of forecasting were achieved. The proposed IDVV and multi-kernel ELM-based method was feasible and effective for epileptic EEG detection. Hence, the newly proposed method has importance for practical applications.

  1. Labyrinths, columns and cavities: new internal features of pollen grain walls in the Acanthaceae detected by FIB-SEM.

    Science.gov (United States)

    House, Alisoun; Balkwill, Kevin

    2016-03-01

    External pollen grain morphology has been widely used in the taxonomy and systematics of flowering plants, especially the Acanthaceae which are noted for pollen diversity. However internal pollen wall features have received far less attention due to the difficulty of examining the wall structure. Advancing technology in the field of microscopy has made it possible, with the use of a focused ion beam-scanning electron microscope (FIB-SEM), to view the structure of pollen grain walls in far greater detail and in three dimensions. In this study the wall structures of 13 species from the Acanthaceae were investigated for features of potential systematic relevance. FIB-SEM was applied to obtain precise cross sections of pollen grains at selected positions for examining the wall ultrastructure. Exploratory studies of the exine have thus far identified five basic structural types. The investigations also show that similar external pollen wall features may have a distinctly different internal structure. FIB-SEM studies have revealed diverse internal pollen wall features which may now be investigated for their systematic and functional significance.

  2. A Comparative Study of Multiple Object Detection Using Haar-Like Feature Selection and Local Binary Patterns in Several Platforms

    Directory of Open Access Journals (Sweden)

    Souhail Guennouni

    2015-01-01

    Full Text Available Object detection has been attracting much interest due to the wide spectrum of applications that use it. It has been driven by the increasing processing power available in software and hardware platforms. In this work we present an application for multiple-object detection based on OpenCV libraries. The complexity-related aspects that were considered in the object detection using a cascade classifier are described. Furthermore, we discuss the profiling and porting of the application to an embedded platform and compare the results with those obtained on traditional platforms. The proposed application targets real-time implementation, and the results give a metric able to indicate where object detection applications may be more complex and where they may be simpler.
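
    A minimal OpenCV cascade-detection sketch of the kind of application described; the cascade XML file and image paths are placeholders, and the Haar-versus-LBP comparison and embedded-platform profiling are not reproduced.

```python
import cv2

# Load a trained cascade (Haar-like or LBP; the path is a placeholder)
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")                       # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Multi-scale sliding-window detection with the cascade classifier
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(30, 30))

for (x, y, w, h) in objects:                        # draw the detections
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```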

  3. Quantitative assessment of the influence of anatomic noise on the detection of subtle lung nodule in digital chest radiography using fractal-feature distance

    International Nuclear Information System (INIS)

    Imai, Kuniharu; Ikeda, Mitsuru; Enchi, Yukihiro; Niimi, Takanaga

    2008-01-01

    Purpose: To confirm whether or not the influence of anatomic noise on the detection of nodules in digital chest radiography can be evaluated by the fractal-feature distance. Materials and methods: We used the square images with and without a simulated nodule which were generated in our previous observer performance study; the simulated nodule was located on the upper margin of a rib, the inside of a rib, the lower margin of a rib, or the central region between two adjoining ribs. For the square chest images, fractal analysis was conducted using the virtual volume method. The fractal-feature distances between the considered and the reference images were calculated using the pseudo-fractal dimension and complexity, and the square images without the simulated nodule were employed as the reference images. We compared the fractal-feature distances with the observer's confidence level regarding the presence of a nodule in plain chest radiograph. Results: For all square chest images, the relationships between the length of the square boxes and the mean of the virtual volumes were linear on a log-log scale. For all types of the simulated nodules, the fractal-feature distance was the highest for the simulated nodules located on the central region between two adjoining ribs and was the lowest for those located in the inside of a rib. The fractal-feature distance showed a linear relation to an observer's confidence level. Conclusion: The fractal-feature distance would be useful for evaluating the influence of anatomic noise on the detection of nodules in digital chest radiography

  4. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today’s H.264/AVC coded videos combine high quality with a high data-compression ratio, strong fault tolerance, and good network adaptability, and they have been widely applied on the Internet. With the popularity of powerful, easy-to-use video editing software, digital videos can be tampered with in various ways. Detecting double compression in an H.264/AVC video can therefore serve as a first step in video-tampering forensics. This paper proposes a simple but effective double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated into one enhanced feature that represents the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the feature, and the primary Group of Pictures (GOP) size is estimated with an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
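
    The periodicity step can be illustrated with a short sketch. It assumes a per-frame sequence in which the SODB and skip-macroblock statistics have already been combined into one enhanced feature (extracting those statistics requires an H.264 bitstream parser and is not shown), and it estimates the primary GOP size by exhaustively scoring every candidate period, loosely in the spirit of the exhaustive strategy mentioned above.

```python
# Sketch (assumption): `feature` is one value per frame, already combining the
# SODB and skip-macroblock statistics. Frames at multiples of the original GOP
# size are expected to spike, so we score each candidate period by the mean
# normalized feature value at its multiples and keep the best-scoring period.
import numpy as np

def estimate_primary_gop(feature, max_gop=60):
    feature = np.asarray(feature, dtype=float)
    feature = (feature - feature.mean()) / (feature.std() + 1e-9)  # zero-mean, unit-variance
    best_gop, best_score = None, -np.inf
    for gop in range(2, max_gop + 1):
        idx = np.arange(0, len(feature), gop)   # frame indices at multiples of the candidate period
        score = feature[idx].mean()             # strong positive spikes -> periodic artifact
        if score > best_score:
            best_gop, best_score = gop, score
    return best_gop, best_score
```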

  5. Detection of the 3.4 micron emission feature in Comets P/Brorsen-Metcalf and Okazaki-Levy-Rudenko (1989r) and an observational summary

    International Nuclear Information System (INIS)

    Brooke, T.Y.; Tokunaga, A.T.; Knacke, R.F.

    1991-01-01

    The 3.4 micron emission feature due to cometary organics was detected in Comets P/Brorsen-Metcalf and Okazaki-Levy-Rudenko (1989r). Feature-to-continuum ratios in these two comets were higher than expected from the trend seen in other comets to date. Three-micron spectra of eight comets are reviewed. The 3.4 micron band flux is better correlated with the water production rate than with the dust production rate in this sample of comets. The high feature-to-continuum ratios in P/Brorsen-Metcalf and Okazaki-Levy-Rudenko can be explained by the low dust-to-gas ratios of these two comets. The observations to date are consistent with cometary organics being present in all comets (even those for which no 3.4 micron feature was evident) at comparable abundances with respect to water. The emission mechanism and absolute abundance of the organics are not well determined; either gas-phase fluorescence or thermal emission from hot grains is consistent with the heliocentric distance dependence of the 3.4 micron band flux. There is an overall similarity in the spectral profiles of the 3.4 micron feature among comets; however, there are some potentially significant differences in the details of the spectra.
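
    For readers unfamiliar with the quantity, the toy calculation below shows one way a feature-to-continuum ratio could be formed: fit a linear continuum from windows on either side of the 3.4 micron band and compare the in-band flux to it. The wavelength windows and the use of a peak ratio are assumptions for illustration, not the reduction actually applied to these spectra.

```python
# Toy illustration (assumptions): the continuum is a straight line fitted to flux in
# windows bracketing the 3.4 micron band, and the ratio reported is the peak of
# (observed flux / interpolated continuum) inside the band.
import numpy as np

def feature_to_continuum(wavelength_um, flux, band=(3.3, 3.5),
                         blue_win=(3.1, 3.25), red_win=(3.55, 3.7)):
    wl, fl = np.asarray(wavelength_um, float), np.asarray(flux, float)
    side = (((wl > blue_win[0]) & (wl < blue_win[1])) |
            ((wl > red_win[0]) & (wl < red_win[1])))
    slope, intercept = np.polyfit(wl[side], fl[side], 1)   # linear continuum fit
    in_band = (wl > band[0]) & (wl < band[1])
    continuum = slope * wl[in_band] + intercept
    return float(np.max(fl[in_band] / continuum))           # peak ratio inside the band
```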

  6. Evaluation of wavelet spectral features in pathological detection and discrimination of yellow rust and powdery mildew in winter wheat with hyperspectral reflectance data

    Science.gov (United States)

    Shi, Yue; Huang, Wenjiang; Zhou, Xianfeng

    2017-04-01

    Hyperspectral absorption features are important indicators of characterizing plant biophysical variables for the automatic diagnosis of crop diseases. Continuous wavelet analysis has proven to be an advanced hyperspectral analysis technique for extracting absorption features; however, specific wavelet features (WFs) and their relationship with pathological characteristics induced by different infestations have rarely been summarized. The aim of this research is to determine the most sensitive WFs for identifying specific pathological lesions from yellow rust and powdery mildew in winter wheat, based on 314 hyperspectral samples measured in field experiments in China in 2002, 2003, 2005, and 2012. The resultant WFs could be used as proxies to capture the major spectral absorption features caused by infestation of yellow rust or powdery mildew. Multivariate regression analysis based on these WFs outperformed conventional spectral features in disease detection; meanwhile, a Fisher discrimination model exhibited considerable potential for generating separable clusters for each infestation. Optimal classification returned an overall accuracy of 91.9% with a Kappa of 0.89. This paper also emphasizes the WFs and their relationship with pathological characteristics in order to provide a foundation for the further application of this approach in monitoring winter wheat diseases at the regional scale.
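
    A hedged sketch of the general workflow is given below: a continuous wavelet transform of a reflectance spectrum supplies candidate wavelet features, and a Fisher linear discriminant separates the infestation classes. The Mexican-hat mother wavelet, the scales, and the sampled band positions are placeholders, not the sensitive WFs identified in the study.

```python
# Sketch (assumptions): continuous wavelet features from reflectance spectra, then a
# Fisher linear discriminant. The mother wavelet ('mexh'), scales, and band_indices
# are illustrative placeholders and assume spectra with more than 200 bands.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(reflectance, scales=(2, 4, 8, 16), band_indices=(50, 120, 200)):
    coeffs, _ = pywt.cwt(reflectance, scales, 'mexh')  # shape: (n_scales, n_bands)
    return coeffs[:, band_indices].ravel()             # sample a few scale/band positions

def train_discriminator(spectra, labels):
    # spectra: (n_samples, n_bands) reflectance matrix
    # labels: e.g. "healthy", "yellow_rust", "powdery_mildew"
    X = np.array([wavelet_features(s) for s in spectra])
    clf = LinearDiscriminantAnalysis()
    return clf.fit(X, labels)
```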

  7. A composite method based on formal grammar and DNA structural features in detecting human polymerase II promoter region.

    Directory of Open Access Journals (Sweden)

    Sutapa Datta

    Full Text Available An important step in understanding gene regulation is to identify the promoter regions where transcription factor binding takes place. Predicting a promoter region de novo has long been a theoretical goal for many researchers. A number of in silico methods exist to predict promoter regions de novo, but most of them still suffer from various shortcomings, a major one being the selection of appropriate features that distinguish promoters from non-promoters. In this communication, we have proposed a new composite method that predicts promoter sequences based on the interrelationship between structural profiles of DNA and primary sequence elements of the promoter regions. We have shown that a Context Free Grammar (CFG) can formalize the relationships between different primary sequence features, and by utilizing the CFG we demonstrate that an efficient parser can be constructed for extracting these relationships from DNA sequences to distinguish true promoter sequences from non-promoter sequences. Along with the CFG, we have extracted the structural features of the promoter region to improve the efficiency of our prediction system. Extensive experiments performed on different datasets reveal that our method is effective in predicting promoter sequences on a genome-wide scale and performs satisfactorily compared to other promoter prediction techniques.
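
    To show how primary-sequence relationships can be written as grammar productions and parsed, here is a toy context-free grammar over nucleotide strings with a naive recursive-descent recognizer. The grammar (a GC box, a spacer, then a TATA-like box) and the helper names are invented for this sketch and are not the grammar or parser used in the paper.

```python
# Toy illustration (assumptions): a hand-written context-free grammar over nucleotide
# strings, recognized by naive recursive descent, to flag candidate promoter windows.
# Nonterminals map to lists of productions; any symbol not in GRAMMAR is a literal string.
GRAMMAR = {
    "PROMOTER": [["GCBOX", "SPACER", "TATABOX"]],
    "GCBOX":    [["GGGCGG"], ["CCGCCC"]],
    "TATABOX":  [["TATAAA"], ["TATATA"]],
    "SPACER":   [["N", "SPACER"], ["N"]],
    "N":        [["A"], ["C"], ["G"], ["T"]],
}

def derives(symbol, seq, pos):
    """Return every end position such that `symbol` can generate seq[pos:end]."""
    if symbol not in GRAMMAR:                       # literal terminal string
        end = pos + len(symbol)
        return {end} if seq.startswith(symbol, pos) else set()
    ends = set()
    for production in GRAMMAR[symbol]:
        starts = {pos}
        for part in production:                     # chain the parts of the production
            starts = {e for s in starts for e in derives(part, seq, s)}
        ends |= starts
    return ends

def is_candidate_promoter(window):
    return len(window) in derives("PROMOTER", window, 0)

# Example: a GC box, a 12-nt spacer, then a TATA box -> recognized as a candidate
print(is_candidate_promoter("GGGCGG" + "ACGTACGTACGT" + "TATAAA"))   # True
```

    In the composite method described above, such grammar-derived evidence would be combined with DNA structural-profile features; only the parsing idea is illustrated here.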

  8. A Composite Method Based on Formal Grammar and DNA Structural Features in Detecting Human Polymerase II Promoter Region

    Science.gov (United States)

    Datta, Sutapa; Mukhopadhyay, Subhasis

    2013-01-01

    An important step in understanding gene regulation is to identify the promoter regions where transcription factor binding takes place. Predicting a promoter region de novo has long been a theoretical goal for many researchers. A number of in silico methods exist to predict promoter regions de novo, but most of them still suffer from various shortcomings, a major one being the selection of appropriate features that distinguish promoters from non-promoters. In this communication, we have proposed a new composite method that predicts promoter sequences based on the interrelationship between structural profiles of DNA and primary sequence elements of the promoter regions. We have shown that a Context Free Grammar (CFG) can formalize the relationships between different primary sequence features, and by utilizing the CFG we demonstrate that an efficient parser can be constructed for extracting these relationships from DNA sequences to distinguish true promoter sequences from non-promoter sequences. Along with the CFG, we have extracted the structural features of the promoter region to improve the efficiency of our prediction system. Extensive experiments performed on different datasets reveal that our method is effective in predicting promoter sequences on a genome-wide scale and performs satisfactorily compared to other promoter prediction techniques. PMID:23437045

  9. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    Science.gov (United States)

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The conv