WorldWideScience

Sample records for bathymetric features detected

  1. Variability In Long-Wave Runup as a Function of Nearshore Bathymetric Features

    Energy Technology Data Exchange (ETDEWEB)

    Dunkin, Lauren McNeill [Texas A & M Univ., College Station, TX (United States)

    2010-05-01

    Beaches and barrier islands are vulnerable to extreme storm events, such as hurricanes, that can cause severe erosion and overwash to the system. Having dunes and a wide beach in front of coastal infrastructure can provide protection during a storm, but the influence that nearshore bathymetric features have in protecting the beach and barrier island system is not completely understood. The spatial variation in nearshore features, such as sand bars and beach cusps, can alter nearshore hydrodynamics, including wave setup and runup. The influence of bathymetric features on long-wave runup can be used in evaluating the vulnerability of coastal regions to erosion and dune overtopping, evaluating the changing morphology, and implementing plans to protect infrastructure. In this thesis, long-wave runup variation due to changing bathymetric features is quantified with the numerical model XBeach (eXtreme Beach behavior model). Wave heights are analyzed to determine the energy through the surfzone. XBeach assumes that coastal erosion at the land-sea interface is dominated by bound long-wave processes. Several hydrodynamic conditions are used to force the numerical model. The XBeach simulation results suggest that bathymetric irregularity induces significant changes in the extreme long-wave runup at the beach and the energy indicator through the surfzone.

  2. Integrating bathymetric and topographic data

    Science.gov (United States)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly-spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
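
    As a hedged illustration of the hold-out RMSE comparison described above, the Python sketch below interpolates synthetic scattered soundings with two stand-in methods (thin plate spline via SciPy's RBFInterpolator and a hand-rolled inverse-distance-to-a-power weighting) and scores each against withheld points; the study's actual Kriging/MQ implementations and Penang Island data are not reproduced.

```python
# Hold-out RMSE comparison of two interpolators on synthetic scattered soundings.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(500, 2))                         # sounding positions (m)
depth = 30 + 10*np.sin(pts[:, 0]/200) + 5*np.cos(pts[:, 1]/150)   # synthetic depths (m)

train, test = pts[:400], pts[400:]
z_train, z_test = depth[:400], depth[400:]

def idw(train_xy, train_z, query_xy, power=2.0):
    """Inverse distance to a power interpolation."""
    d = np.linalg.norm(query_xy[:, None, :] - train_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w @ train_z) / w.sum(axis=1)

tps = RBFInterpolator(train, z_train, kernel='thin_plate_spline')
for name, z_hat in [('TPS', tps(test)), ('IDW', idw(train, z_train, test))]:
    rmse = np.sqrt(np.mean((z_hat - z_test) ** 2))
    print(f'{name} hold-out RMSE: {rmse:.2f} m')
```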

  3. Subducted bathymetric features linked to variations in earthquake apparent stress along the northern Japan Trench

    Science.gov (United States)

    Moyer, P. A.; Bilek, S. L.; Phillips, W. S.

    2010-12-01

    Ocean floor bathymetric features such as seamounts and ridges are thought to influence the earthquake rupture process when they enter the subduction zone by causing changes in frictional conditions along the megathrust contact between the subducting and overriding plates. Once subducted, these features have been described as localized areas of heterogeneous plate coupling, with some controversy over whether these features cause an increase or decrease in interplate coupling. Along the northern Japan Trench, a number of bathymetric features, such as horst and graben structures and seamounts, enter the subduction zone where they may cause variations in earthquake behavior. Using seismic coda waves, scattered energy following the direct wave arrivals, we compute apparent stress (a measure of stress drop proportional to radiated seismic energy that has been tied to the strength of the fault interface contact) for 329 intermediate-magnitude (3.2 and greater) earthquakes. We correct earthquake spectra for path and site effects and compute apparent stress using the seismic moment and corner frequency determined from the spectra. Preliminary results indicate apparent stress values between 0.3 - 22.6 MPa for events over a depth range of 2 - 55 km, similar to those found in other studies of the region although within a different depth range, with variations both along-strike and downdip. Off the Sanriku Coast, horst and graben structures enter the Japan Trench in an area where a large number of earthquakes occur at shallow (< 30 km) depth. These shallow events have a mean apparent stress of 1.2 MPa (range 0.3 - 3.8 MPa), which is approximately 2 times lower than the mean apparent stress for other events along the northern portion of this margin in the same shallow depth range. The relatively low apparent stress for events related to subducting horst and graben structures suggests weak interplate coupling between the subducting and overriding plates due to small, irregular contact zones with these features at depth. This is in
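
    The apparent-stress and stress-drop quantities named above follow standard definitions; the hedged sketch below applies them to placeholder values (the shear modulus, shear-wave speed and radiated-energy figure are assumptions, not the authors' coda-based estimates).

```python
# Apparent stress sigma_a = mu * E_R / M0 and Brune stress drop from corner frequency.
MU   = 3.0e10     # crustal shear modulus (Pa), assumed
BETA = 3900.0     # shear-wave speed (m/s), assumed

def brune_stress_drop(m0, fc, beta=BETA):
    """Stress drop (Pa) for a Brune circular source: r = 0.37*beta/fc, dsigma = 7*M0/(16*r^3)."""
    r = 0.37 * beta / fc
    return 7.0 * m0 / (16.0 * r**3)

def apparent_stress(m0, e_r, mu=MU):
    """Apparent stress (Pa): shear modulus times radiated energy per unit seismic moment."""
    return mu * e_r / m0

m0, fc, e_r = 1.0e16, 1.5, 2.0e11        # example moment (N m), corner frequency (Hz), energy (J)
print(f'stress drop     ~ {brune_stress_drop(m0, fc)/1e6:.1f} MPa')
print(f'apparent stress ~ {apparent_stress(m0, e_r)/1e6:.2f} MPa')
```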

  4. Quantification of storm-induced bathymetric change in a back-barrier estuary

    Science.gov (United States)

    Ganju, Neil K.; Suttles, Steven E.; Beudin, Alexis; Nowacki, Daniel J.; Miselis, Jennifer L.; Andrews, Brian D.

    2017-01-01

    Geomorphology is a fundamental control on ecological and economic function of estuaries. However, relative to open coasts, there has been little quantification of storm-induced bathymetric change in back-barrier estuaries. Vessel-based and airborne bathymetric mapping can cover large areas quickly, but change detection is difficult because measurement errors can be larger than the actual changes over the storm timescale. We quantified storm-induced bathymetric changes at several locations in Chincoteague Bay, Maryland/Virginia, over the August 2014 to July 2015 period using fixed, downward-looking altimeters and numerical modeling. At sand-dominated shoal sites, measurements showed storm-induced changes on the order of 5 cm, with variability related to stress magnitude and wind direction. Numerical modeling indicates that the predominantly northeasterly wind direction in the fall and winter promotes southwest-directed sediment transport, causing erosion of the northern face of sandy shoals; southwesterly winds in the spring and summer lead to the opposite trend. Our results suggest that storm-induced estuarine bathymetric change magnitudes are often smaller than those detectable with methods such as LiDAR. More precise fixed-sensor methods have the ability to elucidate the geomorphic processes responsible for modulating estuarine bathymetry on the event and seasonal timescale, but are limited spatially. Numerical modeling enables interpretation of broad-scale geomorphic processes and can be used to infer the long-term trajectory of estuarine bathymetric change due to episodic events, when informed by fixed-sensor methods.

  5. A general method for generating bathymetric data for hydrodynamic computer models

    Science.gov (United States)

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
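
    A minimal sketch of the interpolation step described in the report, assuming the bounding points form a triangle: depth at a model grid node is a weighted sum of the three surrounding soundings with linear finite-element shape functions (barycentric coordinates). SciPy's Delaunay triangulation stands in for the report's own search algorithm.

```python
# Linear shape-function interpolation of depth at a model grid point.
import numpy as np
from scipy.spatial import Delaunay

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z  = np.array([5.0, 8.0, 6.0, 12.0])            # depths at survey points (m)
tri = Delaunay(xy)

def linear_fe_depth(p, tri, z):
    """Depth at point p using linear shape functions on the enclosing triangle."""
    simplex = tri.find_simplex(p)
    if simplex < 0:
        raise ValueError('point lies outside the bathymetric data hull')
    verts = tri.simplices[simplex]
    T = tri.transform[simplex]                   # affine map to barycentric coordinates
    b = T[:2].dot(np.asarray(p) - T[2])
    weights = np.append(b, 1.0 - b.sum())        # the three shape-function values
    return float(weights @ z[verts])

print(linear_fe_depth((40.0, 25.0), tri, z))
```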

  6. Degree of anisotropy as an automated indicator of rip channels in high resolution bathymetric models

    Science.gov (United States)

    Trimble, S. M.; Houser, C.; Bishop, M. P.

    2017-12-01

    A rip current is a concentrated seaward flow of water that forms in the surf zone of a beach as a result of alongshore variations in wave breaking. Rips can carry swimmers swiftly into deep water, and they are responsible for hundreds of fatal drownings and thousands of rescues worldwide each year. These currents form regularly alongside hard structures like piers and jetties, and can also form along sandy coasts when there is a three-dimensional bar morphology. This latter rip type tends to be variable in strength and location, making it arguably the most dangerous to swimmers and the most difficult to identify. These currents form in characteristic rip channels in surf zone bathymetry, in which the primary axis of self-similarity is oriented shore-normal. This paper demonstrates a new method for automating identification of such rip channels in bathymetric digital surface models (DSMs) using bathymetric data collected by various remote sensing methods. Degree of anisotropy is used to detect rip channels and distinguish between sandbars, rip channels, and other beach features. This has implications for coastal geomorphology theory and safety practices. As technological advances increase access and accuracy of topobathy mapping methods in the surf zone, frequent nearshore bathymetric DSMs could be more easily captured and processed, then analyzed with this method to result in localized, automated, and frequent detection of rip channels. This could ultimately reduce rip-related fatalities worldwide (i) in present mitigation, by identifying the present location of rip channels, (ii) in forecasting, by tracking the channel's evolution through multiple DSMs, and (iii) in rip education, by improving local lifeguard knowledge of the rip hazard. Although this paper only applies analysis of degree of anisotropy to the identification of rip channels, this parameter can be applied to multiple facets of barrier island morphological analysis.
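
    One plausible way to compute a "degree of anisotropy" grid from a bathymetric DSM is via the eigenvalues of a windowed gradient structure tensor, sketched below on a synthetic barred beach cut by a shore-normal channel; the paper's exact estimator may differ.

```python
# Windowed structure-tensor anisotropy of a gridded bathymetric DSM.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def anisotropy(dsm, window=9):
    gx, gy = sobel(dsm, axis=1), sobel(dsm, axis=0)
    jxx = uniform_filter(gx * gx, window)        # windowed structure-tensor components
    jyy = uniform_filter(gy * gy, window)
    jxy = uniform_filter(gx * gy, window)
    tr  = jxx + jyy
    det = jxx * jyy - jxy**2
    disc = np.sqrt(np.maximum((tr / 2)**2 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc    # lam1 >= lam2 >= 0
    return (lam1 - lam2) / (lam1 + lam2 + 1e-12) # 0 = isotropic, 1 = fully oriented

# synthetic surf-zone bathymetry: alongshore bar cut by a shore-normal rip channel
y, x = np.mgrid[0:200, 0:200]
dsm = -3.0 + 1.0*np.exp(-((y - 100)/20.0)**2) - 0.8*np.exp(-((x - 100)/10.0)**2)
A = anisotropy(dsm)
print('mean anisotropy inside channel:', A[:, 95:105].mean().round(2))
```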

  7. NOS Bathymetric Maps

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection of bathymetric contour maps which represent the seafloor topography includes over 400 individual titles and covers US offshore areas including Hawaii...

  8. Unravel Spurious Bathymetric Highs on the Western Continental Margin of India

    Science.gov (United States)

    Mahale, V. P.

    2017-12-01

    Swath mapping multibeam echosounder systems (MBES) have become a de-facto-standard component on today's research vessels (RV). Modern MBES provide high temporal and spatial resolution for mapping seabed morphology. Improved resolution capabilities require large hull-mounted transceivers, which after installation undergo a calibration procedure during the sea acceptance test (SAT). To accurately estimate various vessel offsets and lever-arm corrections, the installer runs calibration lines over a prominent seabed feature. In 2014, while conducting the SAT for the RV Sindhu Sadhana and calibrating its ATLAS-make MBES system, a search was on for suitable bathymetric highs in the region of operation. Regional hydrographic charts published by the National Hydrographic Office, India, were consulted to locate such features. Two bathymetric highs were spotted on the chart, 20 km apart and 40 km west of the shelf-edge on the Western Continental Margin of India. The charted depths on these highs are 252 m and 343 m on a relatively even but moderately sloping seabed, representing isolated elevations of 900 m. The geographic locations of these knolls were verified against GEBCO's 30-arc-second gridded bathymetry before heading out to the waypoints. There were no signs of knolls at those locations, indicating erroneous georeferencing. Hence, the region was revisited in the following years until an area of 3000 sq. km was mapped. As the bathymetric highs could not be located, they are referred to as 'spurious'. An investigation was planned to unravel how these knolls came to exist and persist in the hydrographic charts since historic times. Tweaking the MBES settings revealed the existence of a strong acoustic scattering layer, onto which even the depth-tracking gate locks, and this is documented. Analogically, in the past, ships transecting the region equipped with single beam echosounders tuned for shallow depth operations might have charted the

  9. Massachusetts Bay - Internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This feature class contains internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface for Massachusetts Bay. The...

  10. Synoptic channel morphodynamics with topo-bathymetric airborne lidar: promises, pitfalls and research needs

    Science.gov (United States)

    Lague, D.; Launeau, P.; Gouraud, E.

    2017-12-01

    Topo-bathymetric airborne lidar sensors using a green laser that penetrates water and is suitable for hydrography are now sold by major manufacturers. In the context of channel morphodynamics, repeat surveys could offer synoptic high resolution measurement of topo-bathymetric change, key data that are currently missing. Yet, beyond the technological promise, what can we really achieve with these sensors in terms of depth penetration and bathymetric accuracy? Can all rivers be surveyed? How easy is it to process this new type of data to obtain the data needed by geomorphologists? Here we report on the use of the Optech Titan dual wavelength (1064 nm & 532 nm) sensor operated by the universities of Rennes and Nantes (France) and deployed over several rivers and lakes in France, including repeat surveys. We will illustrate cases where the topo-bathymetric survey is complete, reaching depths of up to 6 m in rivers and offering unprecedented data for channel morphology analysis over tens of kilometres. We will also present challenging cases for which the technology will never work, or for which new algorithms to process the full waveform are required. We will illustrate new developments for automated processing of large datasets, including the critical step of water surface detection and refraction correction. In suitable rivers, airborne topo-bathymetric surveys offer unprecedented synoptic 3D data at very high resolution (> 15 pts/m² in bathy) and precision (better than 10 cm for the bathy) down to 5-6 meters depth, with a perfectly continuous topography-to-bathymetry transition. This presentation will illustrate how this new type of data, when combined with 2D hydraulic modelling, offers new insights into the spatial variations of friction in relation to channel bedforms, and into the connectivity between rivers and floodplains.
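
    The refraction-correction step mentioned above can be illustrated with Snell's law and the refractive index of water at 532 nm; the sketch below is a generic geometric correction under a stated definition of "apparent depth", not the processing chain used for the Optech Titan data.

```python
# Snell refraction correction for a green-laser bathymetric return.
import numpy as np

N_WATER = 1.333                          # refractive index of water at 532 nm (approx.)

def refraction_corrected_depth(apparent_depth, incidence_deg):
    """Correct an apparent depth computed with the in-air speed of light and the
    unrefracted beam geometry (that definition of 'apparent depth' is assumed here)."""
    theta_air = np.radians(incidence_deg)
    theta_w = np.arcsin(np.sin(theta_air) / N_WATER)      # Snell's law
    # divide by n (slower light in water) and re-project onto the vertical
    return apparent_depth * np.cos(theta_w) / (N_WATER * np.cos(theta_air))

print(refraction_corrected_depth(5.0, incidence_deg=15.0))
```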

  11. Bathymetric surveys of the Neosho River, Spring River, and Elk River, northeastern Oklahoma and southwestern Missouri, 2016–17

    Science.gov (United States)

    Hunter, Shelby L.; Ashworth, Chad E.; Smith, S. Jerrod

    2017-09-26

    In February 2017, the Grand River Dam Authority filed to relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission. The predominant feature of the Pensacola Hydroelectric Project is Pensacola Dam, which impounds Grand Lake O’ the Cherokees (locally called Grand Lake) in northeastern Oklahoma. Identification of information gaps and assessment of project effects on stakeholders are central aspects of the Federal Energy Regulatory Commission relicensing process. Some upstream stakeholders have expressed concerns about the dynamics of sedimentation and flood flows in the transition zone between major rivers and Grand Lake O’ the Cherokees. To relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission, the hydraulic models for these rivers require high-resolution bathymetric data along the river channels. In support of the Federal Energy Regulatory Commission relicensing process, the U.S. Geological Survey, in cooperation with the Grand River Dam Authority, performed bathymetric surveys of (1) the Neosho River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, (2) the Spring River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, and (3) the Elk River from Noel, Missouri, to the Oklahoma State Highway 10 bridge near Grove, Oklahoma. The Neosho River and Spring River bathymetric surveys were performed from October 26 to December 14, 2016; the Elk River bathymetric survey was performed from February 27 to March 21, 2017. Only areas inundated during those periods were surveyed. The bathymetric surveys covered a total distance of about 76 river miles and a total area of about 5 square miles. More than 1.4 million bathymetric-survey data points were used in the computation and interpolation of bathymetric-survey digital elevation models and derived contours at 1-foot (ft) intervals. The minimum bathymetric-survey elevation of the Neosho

  12. Mariana Trench Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) created a bathymetric digital elevation model (DEM) for the Mariana Trench and adjacent seafloor in the Western...

  13. Bathymetric survey and estimation of the water balance of Lake ...

    African Journals Online (AJOL)

    Quantification of the water balance components and bathymetric survey is very crucial for sustainable management of lake waters. This paper focuses on the bathymetry and the water balance of the crater Lake Ardibo, recently utilized for irrigation. The bathymetric map of the lake is established at a contour interval of 10 ...

  14. 2011 NOAA Bathymetric Lidar: U.S. Virgin Islands - St. Thomas, St. John, St. Croix (Salt River Bay, Buck Island)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data represents a LiDAR (Light Detection & Ranging) gridded bathymetric surface and a gridded relative seafloor reflectivity surface (incorporated into the...

  15. The effect of bathymetric filtering on nearshore process model results

    Science.gov (United States)

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
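
    The filtering experiment can be mimicked as below: apply progressively wider Gaussian filters to a synthetic high-resolution bathymetric grid and track the RMS departure from the unfiltered surface (the wave and flow models themselves are not reproduced).

```python
# Progressive smoothing of a synthetic barred-beach bathymetry grid.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
y, x = np.mgrid[0:256, 0:256]
bathy = -4.0 + 1.2*np.exp(-((y - 120)/15.0)**2)*np.cos(2*np.pi*x/80) \
        + 0.1*rng.standard_normal((256, 256))       # rhythmic bar + survey noise (m)

for sigma in (1, 2, 4, 8, 16):                       # filter scale in grid cells
    smoothed = gaussian_filter(bathy, sigma)
    rms = np.sqrt(np.mean((smoothed - bathy)**2))
    print(f'sigma={sigma:2d}  RMS departure from full resolution: {rms:.3f} m')
```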

  16. Tampa Bay Topographic/Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In this joint demonstration project for the Tampa Bay region, NOAA's National Ocean Service (NOS) and the U.S. Geological Survey (USGS) have merged NOAA bathymetric...

  17. Features for detecting smoke in laparoscopic videos

    Directory of Open Access Journals (Sweden)

    Jalal Nour Aldeen

    2017-09-01

    Full Text Available Video-based smoke detection in laparoscopic surgery has different potential applications, such as the automatic addressing of surgical events associated with the electrocauterization task and the development of automatic smoke removal. In the literature, video-based smoke detection has been studied widely for fire surveillance systems. Nevertheless, the proposed methods are insufficient for smoke detection in laparoscopic videos because they often depend on assumptions which rarely hold in laparoscopic surgery, such as a static camera. In this paper, ten visual features based on motion, texture and colour of smoke are proposed and evaluated for smoke detection in laparoscopic videos. These features are RGB channels, an energy-based feature, texture features based on the gray level co-occurrence matrix (GLCM), an HSV colour space feature, and features based on the detection of moving regions using optical flow and the smoke colour in HSV colour space. These features were tested on four laparoscopic cholecystectomy videos. Experimental observations show that each feature can provide valuable information in performing the smoke detection task. However, each feature has weaknesses in detecting the presence of smoke in some cases. By combining all proposed features, smoke with high and even low density can be identified robustly and the classification accuracy increases significantly.

  18. Bathymetric terrain model of the Atlantic margin for marine geological investigations

    Science.gov (United States)

    Andrews, Brian D.; Chaytor, Jason D.; ten Brink, Uri S.; Brothers, Daniel S.; Gardner, James V.; Lobecker, Elizabeth A.; Calder, Brian R.

    2016-01-01

    A bathymetric terrain model of the Atlantic margin covering almost 725,000 square kilometers of seafloor from the New England Seamounts in the north to the Blake Basin in the south is compiled from existing multibeam bathymetric data for marine geological investigations. Although other terrain models of the same area are extant, they are produced either from satellite-derived bathymetry at coarse resolution (ETOPO1) or from older bathymetric data collected using a combination of single-beam and multibeam sonars (Coastal Relief Model). The new multibeam data used to produce this terrain model have been edited using hydrographic data processing software to maximize the quality, usability, and cartographic presentation of the combined 100-meter resolution grid. The final grid provides the largest high-resolution, seamless terrain model of the Atlantic margin.

  19. Bathymetric Signatures of Oceanic Detachment Faulting and Potential Ultramafic Lithologies at Outcrop or in the Shallow Subseafloor

    Science.gov (United States)

    Cann, J. R.; Smith, D. K.; Escartin, J.; Schouten, H.

    2008-12-01

    For ten years, domal bathymetric features capped by corrugated and striated surfaces have been recognized as exposures of oceanic detachment faults, and hence potentially as exposures of plutonic rocks from lower crust or upper mantle. Associated with these domes are other bathymetric features that indicate the presence of detachment faulting. Taken together these bathymetric signatures allow the mapping of large areas of detachment faulting at slow and intermediate spreading ridges, both at the axis and away from it. These features are: 1. Smooth elevated domes corrugated parallel to the spreading direction, typically 10-30 km wide parallel to the axis; 2. Linear ridges with outward-facing slopes steeper than 20°, running parallel to the spreading axis, typically 10-30 km long; 3. Deep basins with steep sides and relatively flat floors, typically 10-20 km long parallel to the spreading axis and 5-10 km wide. This characteristic bathymetric association arises from the rolling over of long-lived detachment faults as they spread away from the axis. The faults dip steeply close to their origin at a few kilometers depth near the spreading axis, and rotate to shallow dips as they continue to evolve, with associated footwall flexure and rotation of rider blocks carried on the fault surface. The outward slopes of the linear ridges can be shown to be rotated volcanic seafloor transported from the median valley floor. The basins may be formed by the footwall flexure, and may be exposures of the detachment surface. Critical in this analysis is that the corrugated domes are not the only sites of detachment faulting, but are the places where higher parts of much more extensive detachment faults happen to be exposed. The fault plane rises and falls along axis, and in some places is covered by rider blocks, while in others it is exposed at the sea floor. We use this association to search for evidence for detachment faulting in existing surveys, identifying for example an area

  20. The use of bathymetric data in society and science: a review from the Baltic Sea.

    Science.gov (United States)

    Hell, Benjamin; Broman, Barry; Jakobsson, Lars; Jakobsson, Martin; Magnusson, Ake; Wiberg, Patrik

    2012-03-01

    Bathymetry, the underwater topography, is a fundamental property of oceans, seas, and lakes. As such it is important for a wide range of applications, like physical oceanography, marine geology, geophysics and biology or the administration of marine resources. The exact requirements users may have regarding bathymetric data are, however, unclear. Here, the results of a questionnaire survey and a literature review are presented, concerning the use of Baltic Sea bathymetric data in research and for societal needs. It is demonstrated that there is a great need for detailed bathymetric data. Despite the abundance of high-quality bathymetric data that are produced for safety of navigation purposes, the digital bathymetric models publicly available to date cannot satisfy this need. Our study shows that DBMs based on data collected for safety of navigation could substantially improve the base data for administrative decision making as well as the possibilities for marine research in the Baltic Sea.

  1. Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model

    Science.gov (United States)

    Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.

    2009-01-01

    Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.

  2. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    Science.gov (United States)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric systems (MBS) have been widely applied in marine surveying for providing high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps, namely the separation of the low-frequency and the high-frequency part of the bathymetric data, the reconstruction of the trend of the actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation, are involved in the method. Experiment results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
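
    A hedged sketch of the four-step idea on a synthetic along-track profile: a simple Gaussian low-pass stands in for whatever spectral separation the authors used, an assumed error-free reference supplies the rebuilt trend, and the extracted microtopography is merged back in.

```python
# Separate, rebuild and merge a synthetic deep-water bathymetric profile.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 5000, 2000)                      # along-track distance (m)
trend = -1500 - 0.05 * x                            # deep-water seabed trend (m)
micro = 2.0 * np.sin(2 * np.pi * x / 150)           # small-scale seabed topography (m)
residual = 5.0 * np.sin(2 * np.pi * x / 4000)       # long-wavelength residual systematic error
measured = trend + micro + residual                 # simulated centre-beam profile
reference = trend + micro                           # stand-in for an error-free trend source

low = gaussian_filter1d(measured, sigma=80)         # step 1: low-frequency part (trend + error)
high = measured - low                               #         extracted microtopography
rebuilt_trend = gaussian_filter1d(reference, sigma=80)   # step 2: reconstructed seabed trend
corrected = rebuilt_trend + high                    # step 3: merge trend and microtopography

truth = trend + micro                               # step 4: accuracy evaluation
print('RMS error before:', round(float(np.sqrt(np.mean((measured - truth) ** 2))), 2))
print('RMS error after :', round(float(np.sqrt(np.mean((corrected - truth) ** 2))), 2))
```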

  3. Processing and evaluation of riverine waveforms acquired by an experimental bathymetric LiDAR

    Science.gov (United States)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2010-12-01

    Accurate mapping of fluvial environments with airborne bathymetric LiDAR is challenged not only by environmental characteristics but also the development and application of software routines to post-process the recorded laser waveforms. During a bathymetric LiDAR survey, the transmission of the green-wavelength laser pulses through the water column is influenced by a number of factors including turbidity, the presence of organic material, and the reflectivity of the streambed. For backscattered laser pulses returned from the river bottom and digitized by the LiDAR detector, post-processing software is needed to interpret and identify distinct inflections in the reflected waveform. Relevant features of this energy signal include the air-water interface, volume reflection from the water column itself, and, ideally, a strong return from the bottom. We discuss our efforts to acquire, analyze, and interpret riverine surveys using the USGS Experimental Advanced Airborne Research LiDAR (EAARL) in a variety of fluvial environments. Initial processing of data collected in the Trinity River, California, using the EAARL Airborne Lidar Processing Software (ALPS) highlighted the difficulty of retrieving a distinct bottom signal in deep pools. Examination of laser waveforms from these pools indicated that weak bottom reflections were often neglected by a trailing edge algorithm used by ALPS to process shallow riverine waveforms. For the Trinity waveforms, this algorithm had a tendency to identify earlier inflections as the bottom, resulting in a shallow bias. Similarly, an EAARL survey along the upper Colorado River, Colorado, also revealed the inadequacy of the trailing edge algorithm for detecting weak bottom reflections. We developed an alternative waveform processing routine by exporting digitized laser waveforms from ALPS, computing the local extrema, and fitting Gaussian curves to the convolved backscatter. Our field data indicate that these techniques improved the
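
    The alternative waveform routine described above can be illustrated generically: locate local maxima in a digitized return and fit a sum of Gaussians so a weak, late bottom peak is still resolved. The sketch below uses SciPy on a synthetic waveform; EAARL/ALPS specifics are not reproduced.

```python
# Fit Gaussians to a synthetic green-laser waveform and estimate depth.
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

t = np.arange(0, 60.0, 0.5)                               # digitizer time bins (ns)
def gauss(t, a, mu, sig):
    return a * np.exp(-0.5 * ((t - mu) / sig) ** 2)

rng = np.random.default_rng(2)
waveform = gauss(t, 120, 12, 2.5) + gauss(t, 14, 38, 3.0) + rng.normal(0, 1.5, t.size)

peaks, _ = find_peaks(waveform, prominence=8)             # candidate surface/bottom returns
def two_gauss(t, a1, m1, s1, a2, m2, s2):
    return gauss(t, a1, m1, s1) + gauss(t, a2, m2, s2)

p0 = [waveform[peaks[0]], t[peaks[0]], 2.0, waveform[peaks[-1]], t[peaks[-1]], 2.0]
popt, _ = curve_fit(two_gauss, t, waveform, p0=p0)
surface_ns, bottom_ns = sorted((popt[1], popt[4]))
depth_m = (bottom_ns - surface_ns) * 1e-9 * (3e8 / 1.333) / 2   # two-way travel in water
print(f'surface at {surface_ns:.1f} ns, bottom at {bottom_ns:.1f} ns, depth ~ {depth_m:.2f} m')
```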

  4. Detailed bathymetric surveys in the central Indian Basin

    Digital Repository Service at National Institute of Oceanography (India)

    Kodagali, V.N.; KameshRaju, K.A.; Ramprasad, T.; George, P.; Jaisankar, S.

    Over 420,000 line kilometers of echo-sounding data was collected in the Central Indian Basin. This data was digitized, merged with navigation data and a detailed bathymetric map of the Basin was prepared. The Basin can be broadly classified...

  5. Bathymetric Contour Maps of Lakes Surveyed in Iowa in 2005

    Science.gov (United States)

    Linhart, S.M.; Lund, K.D.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, conducted bathymetric surveys on seven lakes in Iowa during 2005 (Arrowhead Pond, Central Park Lake, Lake Keomah, Manteno Park Pond, Lake Miami, Springbrook Lake, and Yellow Smoke Lake). The surveys were conducted to provide the Iowa Department of Natural Resources with information for the development of total maximum daily load limits, particularly for estimating sediment load and deposition rates. The bathymetric surveys provide a baseline for future work on sediment loads and deposition rates for these lakes. All of the lakes surveyed in 2005 are man-made lakes with fixed spillways. Bathymetric data were collected using boat-mounted, differential global positioning system, echo depth-sounding equipment, and computer software. Data were processed with commercial hydrographic software and exported into a geographic information system for mapping and calculating area and volume. Lake volume estimates ranged from 47,784,000 cubic feet (1,100 acre-feet) at Lake Miami to 2,595,000 cubic feet (60 acre-feet) at Manteno Park Pond. Surface area estimates ranged from 5,454,000 square feet (125 acres) at Lake Miami to 558,000 square feet (13 acres) at Springbrook Lake.
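
    The area and volume figures above come from integrating a gridded depth surface; a minimal sketch of that computation on an idealized bowl-shaped lake (synthetic grid, not the Iowa data) follows.

```python
# Lake surface area and volume from a gridded depth surface.
import numpy as np

cell = 10.0                                            # grid spacing (feet)
y, x = np.mgrid[-60:60, -80:80] * cell
depth_ft = 25.0 * (1 - (x / 800.0) ** 2 - (y / 600.0) ** 2)   # idealized bowl-shaped lake
depth_ft = np.clip(depth_ft, 0.0, None)                       # zero outside the shoreline

wet = depth_ft > 0
area_sqft = wet.sum() * cell ** 2
volume_cuft = depth_ft[wet].sum() * cell ** 2

print(f'surface area ~ {area_sqft:,.0f} sq ft ({area_sqft/43560:.0f} acres)')
print(f'volume       ~ {volume_cuft:,.0f} cu ft ({volume_cuft/43560:.0f} acre-feet)')
```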

  6. Submerged karst landforms observed by multibeam bathymetric survey in Nagura Bay, Ishigaki Island, southwestern Japan

    Science.gov (United States)

    Kan, Hironobu; Urata, Kensaku; Nagao, Masayuki; Hori, Nobuyuki; Fujita, Kazuhiko; Yokoyama, Yusuke; Nakashima, Yosuke; Ohashi, Tomoya; Goto, Kazuhisa; Suzuki, Atsushi

    2015-01-01

    Submerged tropical karst features were discovered in Nagura Bay on Ishigaki Island in the southern Ryukyu Islands, Japan. The coastal seafloor at depths shallower than ~ 130 m has been subjected to repeated and alternating subaerial erosion and sedimentation during periods of Quaternary sea-level lowstands. We conducted a broadband multibeam survey in the central area of Nagura Bay (1.85 × 2.7 km) and visualized the high-resolution bathymetric results over a depth range of 1.6-58.5 m. Various types of humid tropical karst landforms were found to coexist within the bay, including fluviokarst, doline karst, cockpit karst, polygonal karst, uvalas, and mega-dolines. Although these submerged karst landforms are covered by thick postglacial reef and reef sediments, their shapes and sizes are distinct from those associated with coral reef geomorphology. The submerged landscape of Nagura Bay likely formed during multiple glacial and interglacial periods. According to our bathymetric results and the aerial photographs of the coastal area, this submerged karst landscape appears to have developed throughout Nagura Bay (i.e., over an area of approximately 6 × 5 km) and represents the largest submerged karst in Japan.

  7. Prediction of topographic and bathymetric measurement performance of airborne low-SNR lidar systems

    Science.gov (United States)

    Cossio, Tristan

    Low signal-to-noise ratio (LSNR) lidar (light detection and ranging) is an alternative paradigm to traditional lidar based on the detection of return signals at the single photoelectron level. The objective of this work was to predict low altitude (600 m) LSNR lidar system performance with regards to elevation measurement and target detection capability in topographic (dry land) and bathymetric (shallow water) scenarios. A modular numerical sensor model has been developed to provide data for further analysis due to the dearth of operational low altitude LSNR lidar systems. This simulator tool is described in detail, with consideration given to atmospheric effects, surface conditions, and the effects of laser phenomenology. Measurement performance analysis of the simulated topographic data showed results comparable to commercially available lidar systems, with a standard deviation of less than 12 cm for calculated elevation values. Bathymetric results, although dependent largely on water turbidity, were indicative of meter-scale horizontal data spacing for sea depths less than 5 m. The high prevalence of noise in LSNR lidar data introduces significant difficulties in data analysis. Novel algorithms to reduce noise are described, with particular focus on their integration into an end-to-end target detection classifier for both dry and submerged targets (cube blocks, 0.5 m to 1.0 m on a side). The key characteristic exploited to discriminate signal and noise is the temporal coherence of signal events versus the random distribution of noise events. Target detection performance over dry earth was observed to be robust, reliably detecting over 90% of targets with a minimal false alarm rate. Comparable results were observed in waters of high clarity, where the investigated system was generally able to detect more than 70% of targets to a depth of 5 m. The results of the study show that CATS, the University of Florida's LSNR lidar prototype, is capable of high fidelity

  8. A new bathymetric survey of the Suwałki Landscape Park lakes

    Directory of Open Access Journals (Sweden)

    Borowiak Dariusz

    2016-12-01

    Full Text Available The results of the latest bathymetric survey of 21 lakes in the Suwałki Landscape Park (SLP) are presented here. Measurements of the underwater lake topography were carried out in the years 2012–2013 using the hydroacoustic method (Lowrance 480M sonar). In the case of four lakes (Błędne, Pogorzałek, Purwin, Wodziłki) this was the first time a bathymetric survey had been performed. Field material was used to prepare bathymetric maps, which were then used for calculating the basic size and shape parameters of the lake basins. The results of the studies are shown against the nearly 90-year history of bathymetric surveying of the SLP lakes. In the light of the current measurements, the total area of the SLP lakes is over 634 hm² and the limnic ratio is 10%. Lake water resources in the park were estimated at 143 037.1 dam³. This value corresponds to a retention index of 2257 mm. In addition, the studies have shown that the previous morphometric data are not very accurate. The relative differences in the lake surface areas ranged from –14.1 to 9.1%, and in the case of volume from –32.2 to 35.3%. The greatest differences in volume, expressed in absolute values, were found in the largest SLP lakes: Hańcza (1716.1 dam³), Szurpiły (1282.0 dam³), Jaczno (816.4 dam³), Perty (427.1 dam³), Jegłówek (391.2 dam³) and Kojle (286.2 dam³). The smallest disparities were observed with respect to the data obtained by the IRS (Inland Fisheries Institute) in Olsztyn. The IMGW (Institute of Meteorology and Water Management) bathymetric measurements were affected by some significant errors, and the morphometric parameters determined on their basis are only approximate.

  9. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao

    2015-02-01

    The patch layout of 3D surfaces reveals the high-level geometric and topological structures. In this paper, we study the patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines several key ingredients, including feature detection, feature filtering, feature curve extension, patch subdivision and boundary smoothing. Our framework is able to compute patch layouts through concave features as previous approaches, but also able to generate nice layouts through smoothing regions. We demonstrate the effectiveness of our framework by comparing with the state-of-the-art methods.

  10. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    Science.gov (United States)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam HydroStar 4300 sonar and two GPS devices, an Ashtech ProMark 500 base and a Thales Z-Max® rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  11. Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems

    Science.gov (United States)

    Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.

    2015-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. Effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with an accuracy and precision of shallow to deep water allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the ability for handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low-cost creates the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  12. MORPHO-BATHYMETRIC PARAMETERS OF RECESS CRUCII LAKE (STÂNIŞOAREI MOUNTAINS)

    Directory of Open Access Journals (Sweden)

    ALIN MIHU-PINTILIE

    2012-03-01

    Full Text Available Morpho-bathymetric parameters of recess Crucii Lake (Stânişoarei Mountains). Crucii Lake in the Stânişoarei Mountains was formed in 1978 as a result of the natural damming of the Cuejdel riverbed after a landslide triggered on the western slope of Muncelul Peak. The event initially led to a small water body 250-300 m long, 25-30 m wide and 4-5 m in maximum depth. In the summer of 1991, following the construction of a forest road in the flysch and amid highly humid conditions, the slide was reactivated, leading to the formation of the largest natural dam lake in Romania. It has a length of 1 km, an area of 12.2 ha, a maximum depth of 16 m and a water volume of ca. 907,000 m³. Morphometric and morpho-bathymetric measurements performed in the summer of 2011, using an integrated Leica System 1200 GPS station for the surveying measurements and a Valeport Midas echosounder for the bathymetric measurements, yielded new values for the morpho-bathymetric parameters. Among them: an area of 13.95 ha, a perimeter of 2801.1 m, a maximum length of 1004.82 m, a maximum width of 282.6 m, and a maximum depth of 16.45 m. To build the numerical model of the lake basin, more than 45,000 depth readings were used, with an equidistance of 0.25 m. The detailed scale of the work was aimed at compiling a proper database and eliminating doubts about inaccuracies in the older analytical methods. At the same time, the evolution of the lake basin was studied in the context of relatively recent geomorphological changes.

  13. Modeling and Analysis of Integrated Bathymetric and Geodetic Data for Inventory Surveys of Mining Water Reservoirs

    Science.gov (United States)

    Ochałek, Agnieszka; Lipecki, Tomasz; Jaśkowski, Wojciech; Jabłoński, Mateusz

    2018-03-01

    A significant part of hydrography is bathymetry, its empirical component. Bathymetry is the study of the underwater depth of waterways and reservoirs, and the graphic presentation of measured data in the form of bathymetric maps, cross-sections and three-dimensional bottom models. The bathymetric measurements are based on the Global Positioning System and devices for hydrographic measurements - an echo sounder and a side-scan sonar. In this research the authors focused on presenting the case of obtaining and processing bathymetric data and building numerical bottom models of two post-mining reclaimed water reservoirs: Dwudniaki Lake in Wierzchosławice and a flooded quarry in Zabierzów. The report also includes an analysis of data from still-operating mining water reservoirs located in Poland to depict how bathymetry can be used in the mining industry. A significant issue is the integration of bathymetric data with geodetic data from tacheometry and terrestrial laser scanning measurements.

  14. Predicting species diversity of benthic communities within turbid nearshore using full-waveform bathymetric LiDAR and machine learners.

    Directory of Open Access Journals (Sweden)

    Antoine Collin

    Full Text Available Epi-macrobenthic species richness, abundance and composition are linked with the type, assemblage and structural complexity of seabed habitat within coastal ecosystems. However, the evaluation of these habitats is highly hindered by limitations related to both waterborne surveys (slow acquisition, shallow water and low reactivity) and water clarity (turbid for most coastal areas). Substratum type/diversity and bathymetric features were elucidated using a supervised method applied to airborne bathymetric LiDAR waveforms over Saint-Siméon-Bonaventure's nearshore area (Gulf of Saint-Lawrence, Québec, Canada). High-resolution underwater photographs were taken at three hundred stations across an 8-km² study area. Seven models based upon state-of-the-art machine learning techniques such as Naïve Bayes, Regression Tree, Classification Tree, C4.5, Random Forest, Support Vector Machine, and CN2 learners were tested for predicting eight epi-macrobenthic species diversity metrics as a function of the class number. The Random Forest outperformed other models with a three-discretized Simpson index applied to epi-macrobenthic communities, explaining 69% (Classification Accuracy) of its variability by mean bathymetry, time range and skewness derived from the LiDAR waveform. Corroborating marine ecological theory, areas with low Simpson epi-macrobenthic diversity responded to low water depths, high skewness and time range, whereas higher Simpson diversity relied upon deeper bottoms (correlated with stronger hydrodynamics) and low skewness and time range. The degree of species heterogeneity was therefore positively linked with the degree of structural complexity of the benthic cover. This work underpins that fully exploited bathymetric LiDAR (not only bathymetrically derived by-products), coupled with a proficient machine learner, is able to rapidly predict habitat characteristics at a spatial resolution relevant to epi-macrobenthos diversity, ranging from clear to
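
    The modelling step can be sketched with scikit-learn: a Random Forest predicting a three-class (discretized) Simpson index from the three waveform predictors named above. The data below are synthetic and the assumed feature-diversity relationship merely echoes the abstract.

```python
# Random Forest on synthetic LiDAR-waveform features vs. a discretized Simpson index.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300                                            # one row per photo station (assumed)
depth     = rng.uniform(1, 20, n)                  # mean bathymetry (m)
timerange = rng.uniform(5, 40, n) - 0.8 * depth    # waveform time range (ns), assumed link
skewness  = rng.normal(0, 1, n) - 0.05 * depth     # waveform skewness, assumed link

# synthetic rule echoing the abstract: deeper, low-skewness bottoms -> higher diversity
score = 0.1*depth - 0.5*skewness - 0.03*timerange + rng.normal(0, 0.4, n)
simpson_class = np.digitize(score, np.quantile(score, [1/3, 2/3]))   # three classes

X = np.column_stack([depth, timerange, skewness])
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print('CV classification accuracy:', cross_val_score(rf, X, simpson_class, cv=5).mean().round(2))
```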

  15. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    Science.gov (United States)

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows estimation of the quality of the mass candidate detector's output for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature
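
    The first proposed method, feature maxima propagation, can be approximated by a grey-scale maximum filter that spreads each feature's peak response to neighbouring pixels so slightly misaligned maxima coincide when feature vectors are built; this operator is a plausible stand-in, not necessarily the paper's exact definition.

```python
# Propagate a peaked feature response so misaligned maxima coincide.
import numpy as np
from scipy.ndimage import maximum_filter

feature_map = np.zeros((9, 9))
feature_map[3, 4] = 1.0                      # this feature peaks at (3, 4)
other_feature = np.zeros((9, 9))
other_feature[4, 4] = 1.0                    # a second feature peaks one pixel away

radius = 1                                   # propagation neighbourhood (pixels)
propagated = maximum_filter(feature_map, size=2 * radius + 1)

# after propagation both features respond maximally at the same location (4, 4)
print(propagated[4, 4], other_feature[4, 4])
```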

  16. Detection of Fraudulent Emails by Employing Advanced Feature Abundance

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Glasdam, Mathies

    2014-01-01

    In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features...... are incorporated step by step. The detection of fraudulent email has been considered as a classification problem and it is evaluated using various state-of-the-art algorithms and on CCM [1], which is the authors' previous cluster-based classification model. The experiments have been performed on diverse feature sets...... and the different classification methods. The comparison of the results is also presented and the evaluations show that for the fraudulent email detection tasks, the feature set is more important regardless of classification method. The results of the study suggest that the task of fraudulent emails detection...

  17. Detection of fraudulent emails by employing advanced feature abundance

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2014-11-01

    Full Text Available In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features are incorporated step by step. The detection of fraudulent email has been considered as a classification problem and it is evaluated using various state-of-the-art algorithms and on CCM (Nizamani et al., 2011) [1], which is the authors' previous cluster-based classification model. The experiments have been performed on diverse feature sets and the different classification methods. The comparison of the results is also presented and the evaluation shows that for the fraudulent email detection tasks, the feature set is more important regardless of classification method. The results of the study suggest that the task of fraudulent email detection requires a careful choice of feature set, while the choice of classification method is of less importance.

  18. Modeling and Analysis of Integrated Bathymetric and Geodetic Data for Inventory Surveys of Mining Water Reservoirs

    Directory of Open Access Journals (Sweden)

    Ochałek Agnieszka

    2018-01-01

    Full Text Available A significant part of hydrography is bathymetry, its empirical component. Bathymetry is the study of the underwater depth of waterways and reservoirs, and the graphic presentation of measured data in the form of bathymetric maps, cross-sections and three-dimensional bottom models. The bathymetric measurements are based on the Global Positioning System and devices for hydrographic measurements – an echo sounder and a side-scan sonar. In this research the authors focused on presenting the case of obtaining and processing bathymetric data and building numerical bottom models of two post-mining reclaimed water reservoirs: Dwudniaki Lake in Wierzchosławice and a flooded quarry in Zabierzów. The report also includes an analysis of data from still-operating mining water reservoirs located in Poland to depict how bathymetry can be used in the mining industry. A significant issue is the integration of bathymetric data with geodetic data from tacheometry and terrestrial laser scanning measurements.

  19. Spatial features register: toward standardization of spatial features

    Science.gov (United States)

    Cascio, Janette

    1994-01-01

    As the need to share spatial data increases, more than agreement on a common format is needed to ensure that the data is meaningful to both the importer and the exporter. Effective data transfer also requires common definitions of spatial features. To achieve this, part 2 of the Spatial Data Transfer Standard (SDTS) provides a model for a spatial features data content specification and a glossary of features and attributes that fit this model. The model provides a foundation for standardizing spatial features. The glossary now contains only a limited subset of hydrographic and topographic features. For it to be useful, terms and definitions must be included for other categories, such as base cartographic, bathymetric, cadastral, cultural and demographic, geodetic, geologic, ground transportation, international boundaries, soils, vegetation, water, and wetlands, and the set of hydrographic and topographic features must be expanded. This paper will review the philosophy of the SDTS part 2 and the current plans for creating a national spatial features register as one mechanism for maintaining part 2.

  20. Bathymetric maps and water-quality profiles of Table Rock and North Saluda Reservoirs, Greenville County, South Carolina

    Science.gov (United States)

    Clark, Jimmy M.; Journey, Celeste A.; Nagle, Doug D.; Lanier, Timothy H.

    2014-01-01

    Lakes and reservoirs are the water-supply source for many communities. As such, water-resource managers that oversee these water supplies require monitoring of the quantity and quality of the resource. Monitoring information can be used to assess the basic conditions within the reservoir and to establish a reliable estimate of storage capacity. In April and May 2013, a global navigation satellite system receiver and fathometer were used to collect bathymetric data, and an autonomous underwater vehicle was used to collect water-quality and bathymetric data at Table Rock Reservoir and North Saluda Reservoir in Greenville County, South Carolina. These bathymetric data were used to create a bathymetric contour map and stage-area and stage-volume relation tables for each reservoir. Additionally, statistical summaries of the water-quality data were used to provide a general description of water-quality conditions in the reservoirs.

  1. Space moving target detection using time domain feature

    Science.gov (United States)

    Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu

    2018-01-01

The traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets, and cannot make full use of the time-domain information. This paper presents a new space moving target detection method based on time-domain features. We first construct the time spectral data of the star map, then analyze the time-domain features of the main objects (target, stars and the background) in star maps, and finally detect the moving targets using the single-pulse feature of the time-domain signal. The real star map target detection experiments show that the proposed method can effectively detect the trajectory of moving targets in the star map sequence, and the detection probability reaches 99% at a false alarm rate of about 8×10⁻⁵, which outperforms the compared algorithms.
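
    A small NumPy sketch of the underlying idea: a star is bright in every frame of the sequence, so its per-pixel time series has a high median, while a moving target touches a given pixel only briefly and produces a single pulse above the temporal median. The synthetic image stack, the 8-sigma threshold, and the MAD-based contrast measure are assumptions for illustration, not the authors' exact features.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, H, W = 50, 64, 64
    stack = rng.normal(100.0, 3.0, size=(T, H, W))        # synthetic background + noise

    stack[:, 20, 20] += 400.0                              # a star: bright in every frame
    for t in range(T):                                     # a moving target: one frame per pixel
        stack[t, 32, (5 + t) % W] += 300.0

    med = np.median(stack, axis=0)                         # per-pixel temporal median (background and stars)
    mad = np.median(np.abs(stack - med), axis=0) + 1e-6    # robust per-pixel scale estimate
    pulse = (stack.max(axis=0) - med) / (1.4826 * mad)     # single-pulse strength of each pixel's time series

    candidates = np.argwhere(pulse > 8.0)                  # pixels touched briefly by a moving object
    print(len(candidates), "candidate target pixels")
    ```

    The star pixel is not flagged because its median is already high, while every pixel along the target's trajectory is.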

  2. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    Science.gov (United States)

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations, due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged

  3. International Bathymetric Chart of the Arctic Ocean, Version 2.23

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  4. International Bathymetric Chart of the Arctic Ocean, Version 1.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  5. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    Science.gov (United States)

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
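
    The arithmetic behind an area/capacity table like the one described above can be sketched from a gridded bed-elevation surface: for each candidate water-surface elevation (stage), the wetted cell count gives the surface area and the summed depths give the capacity. The bowl-shaped synthetic grid, cell size, and stage increments below are illustrative assumptions, not the USGS GIS workflow.

    ```python
    import numpy as np

    cell = 5.0                                              # grid cell size, feet
    x = np.linspace(-1.0, 1.0, 200)
    bed = 700.0 + 40.0 * (x[None, :]**2 + x[:, None]**2)    # synthetic bowl-shaped lake bed, feet

    for stage in np.arange(710.0, 741.0, 10.0):             # candidate water-surface elevations
        depth = np.clip(stage - bed, 0.0, None)             # depth in wetted cells, zero elsewhere
        area = np.count_nonzero(depth) * cell**2            # surface area, square feet
        volume = depth.sum() * cell**2                      # capacity, cubic feet
        print(f"stage {stage:6.1f} ft  area {area:12.0f} ft^2  volume {volume:14.0f} ft^3")
    ```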

  6. Adapting Local Features for Face Detection in Thermal Image

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-11-01

Full Text Available A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting condition. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring the local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the description power of local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained by different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and gave discussion based on the results.
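
    A simplified sketch of a Multi-Block LBP code with a margin around the reference block, as the abstract describes: a neighbour block only contributes a "1" bit if its mean exceeds the centre block's mean by more than the margin, which makes the code less sensitive to sensor noise. The block size, margin value, and synthetic "thermal" frame are assumptions; the authors' full feature set and AdaBoost cascade are not reproduced.

    ```python
    import numpy as np

    def mb_lbp_with_margin(image, top, left, block, margin=2.0):
        """Multi-Block LBP code over a 3x3 grid of blocks with a comparison margin."""
        means = np.empty((3, 3))
        for i in range(3):
            for j in range(3):
                r, c = top + i * block, left + j * block
                means[i, j] = image[r:r + block, c:c + block].mean()
        centre = means[1, 1]
        # clockwise neighbour order starting at the top-left block
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        code = 0
        for bit, (i, j) in enumerate(order):
            if means[i, j] > centre + margin:      # the margin makes the bit robust to noise
                code |= 1 << bit
        return code

    thermal = np.random.default_rng(1).normal(30.0, 0.5, size=(120, 160))  # fake thermal frame
    print(mb_lbp_with_margin(thermal, top=10, left=20, block=8))
    ```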

  7. Impact of bathymetric system advances on hydrography

    Digital Repository Service at National Institute of Oceanography (India)

    Ranade, G.

Bathymetric systems have undergone unprecedented changes with the advancement of motion sensor technology. By the late 1970s, gyro-stabilized, accelerometer-based attitude monitoring systems computing roll, pitch and heave, together with systems based on the Doppler sonar principle, had come into existence.

  8. Hybrid feature selection for supporting lightweight intrusion detection systems

    Science.gov (United States)

    Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin

    2017-08-01

Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The set of features selected in the previous phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on the NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
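
    A minimal scikit-learn sketch of the two phases described above: a chi-square filter produces a preliminary subset, and a random-forest-guided wrapper (here, recursive feature elimination) refines it. The synthetic data standing in for NSL-KDD, the feature counts, and the use of RFE as the wrapper are assumptions for illustration.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, chi2, RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import MinMaxScaler

    # Synthetic stand-in for a traffic dataset such as NSL-KDD (not the real data).
    X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
    X = MinMaxScaler().fit_transform(X)                  # chi2 requires non-negative features

    # Phase 1 (filter): keep the 20 features with the highest chi-square scores.
    X_filt = SelectKBest(chi2, k=20).fit_transform(X, y)

    # Phase 2 (wrapper): let a random forest guide recursive elimination down to 10 features.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    X_final = RFE(rf, n_features_to_select=10).fit_transform(X_filt, y)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    print("cv accuracy on reduced feature set:", cross_val_score(model, X_final, y, cv=5).mean())
    ```

    In practice the selection steps would sit inside the cross-validation loop to avoid leakage; they are applied once here only to keep the sketch short.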

  9. Bathymetric Structure from Motion Photogrammetry: Extracting stream bathymetry from multi-view stereo photogrammetry

    Science.gov (United States)

    Dietrich, J. T.

    2016-12-01

    Stream bathymetry is a critical variable in a number of river science applications. In larger rivers, bathymetry can be measured with instruments such as sonar (single or multi-beam), bathymetric airborne LiDAR, or acoustic doppler current profilers. However, in smaller streams with depths less than 2 meters, bathymetry is one of the more difficult variables to map at high-resolution. Optical remote sensing techniques offer several potential solutions for collecting high-resolution bathymetry. In this research, I focus on direct photogrammetric measurements of bathymetry using multi-view stereo photogrammetry, specifically Structure from Motion (SfM). The main barrier to accurate bathymetric mapping with any photogrammetric technique is correcting for the refraction of light as it passes between the two different media (air and water), which causes water depths to appear shallower than they are. I propose and test an iterative approach that calculates a series of refraction correction equations for every point/camera combination in a SfM point cloud. This new method is meant to address shortcomings of other correction techniques and works within the current preferred method for SfM data collection, oblique and highly convergent photographs. The multi-camera refraction correction presented here produces bathymetric datasets with accuracies of 0.02% of the flying height and precisions of 0.1% of the flying height. This methodology, like many fluvial remote sensing methods, will only work under ideal conditions (e.g. clear water), but it provides an additional tool for collecting high-resolution bathymetric datasets for a variety of river, coastal, and estuary systems.
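
    The simplest form of the refraction correction mentioned above is the nadir-view, small-angle case: apparent depths below the water surface are multiplied by the refractive index of water (about 1.34). The sketch below shows only that simplified correction on a hypothetical set of SfM elevations; the paper's iterative, per-camera method that accounts for viewing geometry is not reproduced.

    ```python
    import numpy as np

    N_WATER = 1.34                      # approximate refractive index of water

    def correct_depths(apparent_elev, water_surface_elev):
        """Small-angle refraction correction: apparent (too-shallow) depths below the
        water surface are scaled by n; points above the surface are left unchanged."""
        apparent_depth = water_surface_elev - apparent_elev
        true_depth = np.where(apparent_depth > 0, apparent_depth * N_WATER, apparent_depth)
        return water_surface_elev - true_depth

    sfm_elev = np.array([101.2, 99.5, 98.8, 100.4])   # hypothetical SfM point elevations, metres
    print(correct_depths(sfm_elev, water_surface_elev=100.0))
    ```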

  10. Studies of high resolution array processing algorithms for multibeam bathymetric applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Schenke, H.W.

    In this paper a study is initiated to observe the usefulness of directional spectral estimation techniques for underwater bathymetric applications. High resolution techniques like the Maximum Likelihood (ML) method and the Maximum Entropy (ME...

  11. Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation

    Directory of Open Access Journals (Sweden)

    G. Benassai

    2017-09-01

Full Text Available The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be achieved by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis and a set of high-resolution wave numerical simulations, completed with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the rip current occurrence, forced by topographically constrained channels incised on the seabed, which were compared with observations.

  12. Volcanic and Hydrothermal Activity of the North Su Volcano: New Insights from Repeated Bathymetric Surveys and ROV Observations

    Science.gov (United States)

    Thal, J.; Bach, W.; Tivey, M.; Yoerger, D.

    2013-12-01

    Bathymetric data from cruises in 2002, 2006, and 2011 were combined and compared to determine the evolution of volcanic activity, seafloor structures, erosional features and to identify and document the distribution of hydrothermal vents on North Su volcano, SuSu Knolls, eastern Manus Basin (Papua New Guinea). Geologic mapping based on ROV observations from 2006 (WHOI Jason-2) and 2011 (MARUM Quest-4000) combined with repeated bathymetric surveys from 2002 and 2011 are used to identify morphologic features on the slopes of North Su and to track temporal changes. ROV MARUM Quest-4000 bathymetry was used to develop a 10 m grid of the top of North Su to precisely depict recent changes. In 2006, the south slope of North Su was steeply sloped and featured numerous white smoker vents discharging acid sulfate waters. These vents were covered by several tens of meters of sand- to gravel-sized volcanic material in 2011. The growth of this new cone changed the bathymetry of the south flank of North Su up to ~50 m and emplaced ~0.014 km3 of clastic volcanic material. This material is primarily comprised of fractured altered dacite and massive fresh dacite as well as crystals of opx, cpx, olivine and plagioclase. There is no evidence for pyroclastic fragmentation, so we hypothesize that the fragmentation is likely related to hydrothermal explosions. Hydrothermal activity varies over a short (~50 m) lateral distance from 'flashing' black smokers to acidic white smoker vents. Within 2 weeks of observation time in 2011, the white smoker vents varied markedly in activity suggesting a highly episodic hydrothermal system. Based on ROV video recordings, we identified steeply sloping (up to 30°) slopes exposing pillars and walls of hydrothermal cemented volcaniclastic material representing former fluid upflow zones. These features show that hydrothermal activity has increased slope stability as hydrothermal cementation has prevented slope collapse. Additionally, in some places

  13. Bathymetric survey of Carroll Creek Tributary to Lake Tuscaloosa, Tuscaloosa County, Alabama, 2010

    Science.gov (United States)

    Lee, K.G.; Kimbrow, D.R.

    2011-01-01

The U.S. Geological Survey, in cooperation with the City of Tuscaloosa, conducted a bathymetric survey of Carroll Creek, on May 12-13, 2010. Carroll Creek is one of the major tributaries to Lake Tuscaloosa and contributes about 6 percent of the surface drainage area. A 3.5-mile reach of Carroll Creek was surveyed to prepare a current bathymetric map, determine storage capacities at specified water-surface elevations, and compare current conditions to historical cross sections. Bathymetric data were collected using a high-resolution interferometric mapping system consisting of a phase-differencing bathymetric sonar, navigation and motion-sensing system, and a data acquisition computer. To assess the accuracy of the interferometric mapping system and document depths in shallow areas of the study reach, an electronic total station was used to survey 22 cross sections spaced 50 feet apart. The data were combined and processed and a Triangulated Irregular Network (TIN) and contour map were generated. Cross sections were extracted from the TIN and compared with historical cross sections. Between 2004 and 2010, the area (cross section 1) at the confluence of Carroll Creek and the main run of Lake Tuscaloosa showed little to no change in capacity area. Another area (cross section 2) showed a maximum change in elevation of 4 feet and an average change of 3 feet. At the water-surface elevation of 224 feet (National Geodetic Vertical Datum of 1929), the cross-sectional area has changed by 260 square feet for a total loss of 28 percent of cross-sectional storage area. The loss of area may be attributed to sedimentation in Carroll Creek and (or) the difference in accuracy between the two surveys.

  14. Boosting instance prototypes to detect local dermoscopic features.

    Science.gov (United States)

    Situ, Ning; Yuan, Xiaojing; Zouridakis, George

    2010-01-01

    Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.

  15. Bathymetric study of the Neotectonic Naini Lake in outer Kumaun Himalaya

    Digital Repository Service at National Institute of Oceanography (India)

    Hashimi, N.H.; Pathak, M.C; Jauhari, P.; Nair, R.R.; Sharma, A.K.; Bhakuni, D.S.; Bisht, M.K.S.; Valdiya, K.S.

    The Naini Lake is a product of rotational movement on a NW-SE trending Nainital Fault, quite after the establishment of the drainage of a mature stream named Balia Nala. Detailed bathymetric study, permits division of this crescent-shaped lake...

  16. Using a personal watercraft for monitoring bathymetric changes at storm scale

    NARCIS (Netherlands)

    Van Son, S.T.J.; Lindenbergh, R.C.; De Schipper, M.A.; De Vries, S.; Duijnmayer, K.

    2009-01-01

    Monitoring and understanding coastal processes is important for the Netherlands since the most densely populated areas are situated directly behind the coastal defense. Traditionally, bathymetric changes are monitored at annual intervals, although nowadays it is understood that most dramatic changes

  17. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Zones 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Zones derived from derivatives of Simrad EM-3000 multibeam bathymetry (3 m resolution). BPI zones are surficial characteristics of...

  18. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

Full Text Available A DDoS attack stream converging at the victim host from different agent hosts becomes very large, which will lead to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in the massive data stream. To address the lack of large labeled datasets required by supervised learning methods, and the relatively low detection accuracy and convergence speed of the unsupervised k-means algorithm, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm under the condition of using a small amount of labeled data.
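
    A small sketch of the general idea behind semisupervised clustering with a few labels: the k-means centroids are seeded from the class means of a small labeled sample instead of being initialized randomly, and the remaining (unlabeled) 3-feature vectors are then clustered. The synthetic traffic features and seeding scheme are assumptions; this is not the published MF-CKM algorithm.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    normal = rng.normal([10.0, 0.2, 1.0], 0.5, size=(500, 3))    # 3 hypothetical traffic features
    attack = rng.normal([200.0, 0.9, 30.0], 5.0, size=(500, 3))
    X = np.vstack([normal, attack])                              # mostly "unlabeled" stream

    # A small labelled sample seeds the centroids (the semisupervised constraint).
    seeds = np.vstack([normal[:10].mean(axis=0), attack[:10].mean(axis=0)])

    km = KMeans(n_clusters=2, init=seeds, n_init=1).fit(X)
    pred = km.labels_                          # cluster 1 corresponds to the attack seed
    print("flagged as attack:", int((pred == 1).sum()), "of", len(X))
    ```

    Seeding from labeled means is what speeds up convergence and stabilizes which cluster corresponds to the attack class.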

  19. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Structures 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Structures are derived from derivatives of Simrad EM-3000 multibeam bathymetry (1 m and 3 m resolution). BPI structures are...

  20. Quantifying the impact of bathymetric changes on the hydrological regimes in a large floodplain lake: Poyang Lake

    Science.gov (United States)

    Yao, Jing; Zhang, Qi; Ye, Xuchun; Zhang, Dan; Bai, Peng

    2018-06-01

    The hydrological regime of a lake is largely dependent on its bathymetry. A dramatic water level reduction has occurred in Poyang Lake in recent years, coinciding with significant bed erosion. Few studies have focused on the influence of bathymetric changes on the hydrological regime in such a complex river-lake floodplain system. This study combined hydrological data and a physically based hydrodynamic model to quantify the influence of the bathymetric changes (1998-2010) on the water level spatiotemporal distribution in Poyang Lake, based on a dry year (2006), a wet year (2010) and an average year (2000-2010). The following conclusions can be drawn from the results of this study: (1) The bed erosion of the northern outlet channel averaged 3 m, resulting in a decrease in the water level by 1.2-2 m in the northern channels (the most significantly influenced areas) and approximately 0.3 m in the central lake areas during low-level periods. The water levels below 16 m and 14 m were significantly affected during the rising period and recession period, respectively. The water level reduction was enhanced due to lower water levels. (2) The water surface profiles adjusted, and the rising and recession rates of the water level increased by 0.5-3.1 cm/d at the lake outlet. The bathymetric influence extended across the entire lake due to the emptying effect, resulting in a change in the water level distribution. The average annual outflow increased by 6.8%. (3) The bathymetric changes contributed approximately 14.4% to the extreme low water level in autumn 2006 and enhanced the drought in the dry season. This study quantified the impact of the bathymetric changes on the lake water levels, thereby providing a better understanding of the potential effects of continued sand mining operations and providing scientific explanations for the considerable variations in the hydrological regimes of Poyang Lake. Moreover, this study attempts to provide a reference for the assessment of

  1. Using Polarization features of visible light for automatic landmine detection

    NARCIS (Netherlands)

    Jong, W. de; Schavemaker, J.G.M.

    2007-01-01

    This chapter describes the usage of polarization features of visible light for automatic landmine detection. The first section gives an introduction to land-mine detection and the usage of camera systems. In section 2 detection concepts and methods that use polarization features are described.

2. Application of an Autonomous/Unmanned Survey Vessel (ASV/USV) in Bathymetric Measurements

    Directory of Open Access Journals (Sweden)

    Specht Cezary

    2017-09-01

Full Text Available The accuracy of bathymetric maps, especially in the coastal zone, is very important from the point of view of safety of navigation and transport. Due to the continuous change in the shape of the seabed, these maps quickly become outdated for precise navigation. Therefore, it is necessary to perform periodic bathymetric measurements to keep them updated on a current basis. At present, none of the institutions in Poland (maritime offices, the Hydrographic Office of the Polish Navy) which are responsible for implementation of this type of measurements has at its disposal a hydrographic vessel capable of carrying out measurements in shallow waters (at depths below 1 m). This results in large areas for which no measurement data have been obtained and, consequently, the maps of the coastal zones are rather unreliable.

  3. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  4. Bathymetric map and area/capacity table for Castle Lake, Washington

    Science.gov (United States)

    Mosbrucker, Adam R.; Spicer, Kurt R.

    2017-11-14

    The May 18, 1980, eruption of Mount St. Helens produced a 2.5-cubic-kilometer debris avalanche that dammed South Fork Castle Creek, causing Castle Lake to form behind a 20-meter-tall blockage. Risk of a catastrophic breach of the newly impounded lake led to outlet channel stabilization work, aggressive monitoring programs, mapping efforts, and blockage stability studies. Despite relatively large uncertainty, early mapping efforts adequately supported several lake breakout models, but have limited applicability to current lake monitoring and hazard assessment. Here, we present the results of a bathymetric survey conducted in August 2012 with the purpose of (1) verifying previous volume estimates, (2) computing an area/capacity table, and (3) producing a bathymetric map. Our survey found seasonal lake volume ranges between 21.0 and 22.6 million cubic meters with a fundamental vertical accuracy representing 0.88 million cubic meters. Lake surface area ranges between 1.13 and 1.16 square kilometers. Relationships developed by our results allow the computation of lake volume from near real-time lake elevation measurements or from remotely sensed imagery.

  5. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method avoids the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher dimensional feature map. Finally, a change map is computed using pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
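
    A hedged PyTorch sketch of the pipeline outlined above: feature maps from several layers of a pretrained CNN are extracted for two co-registered images, upsampled, normalized, concatenated, and compared pixel-wise with Euclidean distance. The backbone (VGG16), layer indices, threshold, and random stand-in images are assumptions, and the pretrained weights are downloaded by torchvision (version 0.13 or later is assumed for the weights API).

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Pretrained backbone used only as a fixed feature extractor (no training).
    backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    layers = [3, 8, 15]                                   # a few convolutional stages (illustrative)

    def multi_layer_features(img, size):
        feats, x = [], img
        for i, layer in enumerate(backbone):
            x = layer(x)
            if i in layers:
                f = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
                feats.append(F.normalize(f, dim=1))       # channel-wise L2 normalisation
            if i == layers[-1]:
                break
        return torch.cat(feats, dim=1)                    # concatenated multi-layer descriptor

    img_t1 = torch.rand(1, 3, 256, 256)                   # stand-ins for two co-registered images
    img_t2 = torch.rand(1, 3, 256, 256)
    with torch.no_grad():
        f1 = multi_layer_features(img_t1, (256, 256))
        f2 = multi_layer_features(img_t2, (256, 256))

    change_map = (f1 - f2).pow(2).sum(dim=1).sqrt()[0]    # pixel-wise Euclidean distance
    changed = change_map > change_map.mean() + 2 * change_map.std()
    print("changed pixels:", int(changed.sum()))
    ```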

  6. Bathymetric survey of the Cayuga Inlet flood-control channel and selected tributaries in Ithaca, New York, 2016

    Science.gov (United States)

    Wernly, John F.; Nystrom, Elizabeth A.; Coon, William F.

    2017-09-08

    From July 14 to July 20, 2016, the U.S. Geological Survey, in cooperation with the City of Ithaca, New York, and the New York State Department of State, surveyed the bathymetry of the Cayuga Inlet flood-control channel and the mouths of selected tributaries to Cayuga Inlet and Cayuga Lake in Ithaca, N.Y. The flood-control channel, built by the U.S. Army Corps of Engineers between 1965 and 1970, was designed to convey flood flows from the Cayuga Inlet watershed through the City of Ithaca and minimize possible flood damages. Since that time, the channel has infrequently been maintained by dredging, and sediment accumulation and resultant shoaling have greatly decreased the conveyance of the channel and its navigational capability.U.S. Geological Survey personnel collected bathymetric data by using an acoustic Doppler current profiler. The survey produced a dense dataset of water depths that were converted to bottom elevations. These elevations were then used to generate a geographic information system bathymetric surface. The bathymetric data and resultant bathymetric surface show the current condition of the channel and provide the information that governmental agencies charged with maintaining the Cayuga Inlet for flood-control and navigational purposes need to make informed decisions regarding future maintenance measures.

  7. Fall Detection Using Smartphone Audio Features.

    Science.gov (United States)

    Cheffena, Michael

    2016-07-01

An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with ANN classifier with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirement for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
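
    A minimal sketch of one feature/classifier pair from the kind of study described above (MFCCs with an SVM), using librosa for feature extraction. The synthetic "thump" and "background" clips, the clip-level mean/std summary, and the RBF-SVM settings are assumptions; the real system uses recorded fall and no-fall events.

    ```python
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    SR = 16000
    rng = np.random.default_rng(0)

    def mfcc_vector(clip):
        m = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13)        # 13 MFCCs per frame
        return np.concatenate([m.mean(axis=1), m.std(axis=1)])    # clip-level summary features

    clips, labels = [], []
    for _ in range(20):                       # synthetic "fall" clips: short broadband thump
        clip = rng.normal(0, 0.01, SR)
        clip[4000:6000] += rng.normal(0, 0.5, 2000)
        clips.append(mfcc_vector(clip)); labels.append(1)
    for _ in range(20):                       # synthetic "no-fall" clips: quiet background
        clips.append(mfcc_vector(rng.normal(0, 0.01, SR))); labels.append(0)

    X, y = np.vstack(clips), np.array(labels)
    print("cv accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
    ```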

  8. Bathymetric survey and digital elevation model of Little Holland Tract, Sacramento-San Joaquin Delta, California

    Science.gov (United States)

    Snyder, Alexander G.; Lacy, Jessica R.; Stevens, Andrew W.; Carlson, Emily M.

    2016-06-10

    The U.S. Geological Survey conducted a bathymetric survey in Little Holland Tract, a flooded agricultural tract, in the northern Sacramento-San Joaquin Delta (the “Delta”) during the summer of 2015. The new bathymetric data were combined with existing data to generate a digital elevation model (DEM) at 1-meter resolution. Little Holland Tract (LHT) was historically diked off for agricultural uses and has been tidally inundated since an accidental levee breach in 1983. Shallow tidal regions such as LHT have the potential to improve habitat quality in the Delta. The DEM of LHT was developed to support ongoing studies of habitat quality in the area and to provide a baseline for evaluating future geomorphic change. The new data comprise 138,407 linear meters of real-time-kinematic (RTK) Global Positioning System (GPS) elevation data, including both bathymetric data collected from personal watercraft and topographic elevations collected on foot at low tide. A benchmark (LHT15_b1) was established for geodetic control of the survey. Data quality was evaluated both by comparing results among surveying platforms, which showed systematic offsets of 1.6 centimeters (cm) or less, and by error propagation, which yielded a mean vertical uncertainty of 6.7 cm. Based on the DEM and time-series measurements of water depth, the mean tidal prism of LHT was determined to be 2,826,000 cubic meters. The bathymetric data and DEM are available at http://dx.doi.org/10.5066/F7RX9954. 

  9. The utility of bathymetric echo sounding data in modelling benthic impacts using NewDEPOMOD driven by an FVCOM model.

    Science.gov (United States)

    Rochford, Meghan; Black, Kenneth; Aleynik, Dmitry; Carpenter, Trevor

    2017-04-01

    The Scottish Environmental Protection Agency (SEPA) are currently implementing new regulations for consenting developments at new and pre-existing fish farms. Currently, a 15-day current record from multiple depths at one location near the site is required to run DEPOMOD, a depositional model used to determine the depositional footprint of waste material from fish farms, developed by Cromey et al. (2002). The present project involves modifying DEPOMOD to accept data from 3D hydrodynamic models to allow for a more accurate representation of the currents around the farms. Bathymetric data are key boundary conditions for accurate modelling of current velocity data. The aim of the project is to create a script that will use the outputs from FVCOM, a 3D hydrodynamic model developed by Chen et al. (2003), and input them into NewDEPOMOD (a new version of DEPOMOD with more accurately parameterised sediment transport processes) to determine the effect of a fish farm on the surrounding environment. This study compares current velocity data under two scenarios; the first, using interpolated bathymetric data, and the second using bathymetric data collected during a bathymetric echo sounding survey of the site. Theoretically, if the hydrodynamic model is of high enough resolution, the two scenarios should yield relatively similar results. However, the expected result is that the survey data will be of much higher resolution and therefore of better quality, producing more realistic velocity results. The improvement of bathymetric data will also improve sediment transport predictions in NewDEPOMOD. This work will determine the sensitivity of model predictions to bathymetric data accuracy at a range of sites with varying bathymetric complexity and thus give information on the potential costs and benefits of echo sounding survey data inputs. Chen, C., Liu, H. and Beardsley, R.C., 2003. An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model

  10. Joint Interpretation of Bathymetric and Gravity Anomaly Maps Using Cross and Dot-Products.

    Science.gov (United States)

    Jilinski, Pavel; Fontes, Sergio Luiz

    2010-05-01

Summary: We present the results of a joint map interpretation technique based on cross and dot-products applied to bathymetric and gravity anomaly gradient maps. According to the theory (Gallardo, Meju, 2004), joint interpretation of different gradient characteristics helps to localize and emphasize patterns unseen in single-image interpretation and gives information about the correlation of different spatial data. Values of the angles between gradients and their cross and dot-products were used. This technique helps to map relations between bathymetric and gravity anomaly maps that remain unseen if they are analyzed separately. Applied to the southern segment of the Eastern-Brazilian coast, the method indicates a strong source-effect relation between bathymetric and gravity anomaly gradients. The details of the method and the obtained results are discussed. Introduction: We applied this method to investigate the correlation between bathymetric and gravity anomalies at the southern segment of the Eastern-Brazilian coast. Gridded satellite global marine gravity data and bathymetric data were used. The studied area is located at the Eastern-Brazilian coast between the 20° W and 30° W meridians and the 15° S and 25° S parallels. The volcanic events responsible for the uncommon width of the continental shelf at the Abrolhos Bank were also responsible for the formation of the Abrolhos islands and seamounts, including the major Vitoria-Trindade chain. According to the literature, these volcanic structures are expected to have a corresponding gravity anomaly (McKenzie, 1976; Zembruscki, S.G. 1979). The main objective of this study is to develop and test a joint image interpretation method to compare spatial data and analyze their relations. Theory and Method, Data sources: The bathymetric satellite data were derived from the 2-minute ETOPO2v2 bathymetry grid obtained from NOAA's National Geophysical Data Center (http://www.ngdc.noaa.gov). The satellite marine gravity 1
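
    A NumPy sketch of the quantities the abstract describes: gradients of gridded bathymetry and gravity anomaly, their dot product, the z-component of their cross product, and the angle between them. The synthetic seamount and gravity high stand in for the ETOPO2 and satellite gravity grids; the 15° alignment threshold is an illustrative choice.

    ```python
    import numpy as np

    y, x = np.mgrid[0:100, 0:100].astype(float)
    bathy = -3000 + 800 * np.exp(-((x - 50)**2 + (y - 50)**2) / 300)   # synthetic seamount
    gravity = 20 * np.exp(-((x - 52)**2 + (y - 50)**2) / 350)          # correlated gravity high

    gby, gbx = np.gradient(bathy)          # bathymetric gradient components
    ggy, ggx = np.gradient(gravity)        # gravity-anomaly gradient components

    dot = gbx * ggx + gby * ggy            # large and positive where the gradients are parallel
    cross_z = gbx * ggy - gby * ggx        # near zero where the gradients are (anti)parallel
    angle = np.degrees(np.arctan2(cross_z, dot))   # signed angle between the two gradients

    aligned = np.abs(angle) < 15           # cells where the two fields vary together
    print("fraction of aligned cells:", aligned.mean())
    ```

    Mapping the angle (or the cross/dot products themselves) highlights where a bathymetric feature has a corresponding gravity signature, which is the source-effect relation the study looks for.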

  11. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments cloud by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experiment results show that MSCN has obvious advantages over the traditional multi-feature combined cloud detection method in accuracy, especially in snow and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  12. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    Science.gov (United States)

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
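
    A hedged PyTorch sketch of the kind of correlation loss described above: features of the same tracked object in neighbouring frames are pulled together, while features of different objects are pushed apart. The cosine-similarity formulation, margins, and toy feature vectors are assumptions; the full detector, anchors, and ImageNet VID training pipeline are not reproduced.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CorrelationLoss(nn.Module):
        """Encourage features of the same track ID in neighbouring frames to agree
        (a simplified stand-in for a Siamese correlation loss)."""
        def forward(self, feats_t, feats_t1, same_track):
            sim = F.cosine_similarity(feats_t, feats_t1, dim=1)      # per-object similarity
            pos = (1.0 - sim) * same_track                           # same object: push toward 1
            neg = F.relu(sim) * (1.0 - same_track)                   # different objects: push toward 0
            return (pos + neg).mean()

    # Toy usage: 4 object feature vectors from frames t and t+1; the first two share track IDs.
    feats_t = torch.randn(4, 256, requires_grad=True)
    feats_t1 = torch.randn(4, 256)
    same_track = torch.tensor([1.0, 1.0, 0.0, 0.0])
    loss = CorrelationLoss()(feats_t, feats_t1, same_track)
    loss.backward()
    print(float(loss))
    ```

    In training, this term would be added to the usual detection losses so that the learned features stay consistent across time.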

  13. Prostate cancer detection: Fusion of cytological and textural features.

    Science.gov (United States)

    Nguyen, Kien; Jain, Anil K; Sabata, Bikash

    2011-01-01

A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  14. Detection of emotional faces: salient physical features guide effective visual search.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  15. Prostate cancer detection: Fusion of cytological and textural features

    Directory of Open Access Journals (Sweden)

    Kien Nguyen

    2011-01-01

Full Text Available A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  16. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection

    Directory of Open Access Journals (Sweden)

    Baojun Zhao

    2018-03-01

Full Text Available With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).

  17. Computed Tomography Features of Incidentally Detected Diffuse Thyroid Disease

    Directory of Open Access Journals (Sweden)

    Myung Ho Rho

    2014-01-01

Full Text Available Objective. This study aimed to evaluate the CT features of incidentally detected DTD in the patients who underwent thyroidectomy and to assess the diagnostic accuracy of CT diagnosis. Methods. We enrolled 209 consecutive patients who received preoperative neck CT and subsequent thyroid surgery. Neck CT in each case was retrospectively investigated by a single radiologist. We evaluated the diagnostic accuracy of individual CT features and the cut-off CT criteria for detecting DTD by comparing the CT features with histopathological results. Results. Histopathological examination of the 209 cases revealed normal thyroid (n=157), Hashimoto thyroiditis (n=17), non-Hashimoto lymphocytic thyroiditis (n=34), and diffuse hyperplasia (n=1). The CT features suggestive of DTD included low attenuation, inhomogeneous attenuation, increased glandular size, lobulated margin, and inhomogeneous enhancement. ROC curve analysis revealed that CT diagnosis of DTD based on the CT classification of “3 or more” abnormal CT features was superior. When the “3 or more” CT classification was selected, the sensitivity, specificity, positive and negative predictive values, and accuracy of CT diagnosis for DTD were 55.8%, 95.5%, 80.6%, 86.7%, and 85.6%, respectively. Conclusion. Neck CT may be helpful for the detection of incidental DTD.

  18. Delving Deep into Multiscale Pedestrian Detection via Single Scale Feature Maps

    Directory of Open Access Journals (Sweden)

    Xinchuan Fu

    2018-04-01

Full Text Available The standard pipeline in pedestrian detection is sliding a pedestrian model over an image feature pyramid to detect pedestrians of different scales. In this pipeline, feature pyramid construction is time consuming and becomes the bottleneck for fast detection. Recently, a method called multiresolution filtered channels (MRFC) was proposed which uses only single-scale feature maps to achieve fast detection. However, there are two shortcomings in MRFC which limit its accuracy. One is that the receptive field correspondence across scales is weak. Another is that the features used are not scale invariant. In this paper, two solutions are proposed to tackle the two shortcomings respectively. Specifically, scale-aware pooling is proposed to achieve a better receptive field correspondence, and a soft decision tree is proposed to relieve the scale variance problem. When coupled with an efficient sliding window classification strategy, our detector achieves fast detection speed together with state-of-the-art accuracy.

  19. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
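
    A scikit-image sketch of the final stage of such a pipeline: compact bright objects (stars) are crudely removed, and the Hough transform is run on the residual binary map to recover a linear trail. The synthetic frame, thresholds, and star-removal step are illustrative assumptions; the paper's full pre-processing and rectangle-verification steps are not reproduced.

    ```python
    import numpy as np
    from skimage.draw import line
    from skimage.transform import hough_line, hough_line_peaks

    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 0.05, size=(200, 200))

    rr, cc = line(20, 10, 180, 170)               # a faint satellite/meteor trail
    img[rr, cc] += 1.0
    for r, c in rng.integers(0, 200, size=(30, 2)):
        img[r, c] += 5.0                          # bright point sources ("stars")

    no_stars = img.copy()
    no_stars[no_stars > 3.0] = 0.0                # crude removal of compact bright objects
    edges = no_stars > 0.5                        # binary map fed to the Hough transform

    h, angles, dists = hough_line(edges)
    _, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
    print("detected Hough angle (deg):", np.degrees(best_angles[0]))
    ```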

  20. Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face Detection

    Directory of Open Access Journals (Sweden)

    Xiaojun Lu

    2017-01-01

    Full Text Available This paper proposes a method that uses feature fusion to represent images better for face detection after feature extraction by deep convolutional neural network (DCNN. First, with Clarifai net and VGG Net-D (16 layers, we learn features from data, respectively; then we fuse features extracted from the two nets. To obtain more compact feature representation and mitigate computation complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification by SVM classifier for binary classification. In particular, we exploit offset max-pooling to extract features with sliding window densely, which leads to better matches of faces and detection windows; thus the detection result is more accurate. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves 89.24% recall rate on FDDB and 97.19% average precision on AFW.

  1. Background area effects on feature detectability in CT and uncorrelated noise

    International Nuclear Information System (INIS)

    Swensson, R.G.; Judy, P.F.

    1987-01-01

    Receiver operating characteristic curve measures of feature detectability decrease substantially when the surrounding area of uniform-noise background is small relative to that of the feature itself. The effect occurs with both fixed and variable-level backgrounds, but differs in form for CT and uncorrelated noise. Cross-correlation image calculations can only predict these effects by treating feature detection as the discrimination of a local change (a ''feature'') from the estimated level of an assumed-uniform region of background

2. MICROVEGA (MICRO VESSEL FOR GEODETICS APPLICATION): A MARINE DRONE FOR THE ACQUISITION OF BATHYMETRIC DATA FOR GIS APPLICATIONS

    Directory of Open Access Journals (Sweden)

    F. Giordano

    2015-04-01

Full Text Available Bathymetric data are fundamental for producing navigational charts and sea-floor 3D models. They can be collected using different techniques and sensors on board a variety of platforms, such as satellites, aircraft, ships and drones. The MicroVEGA drone is an open prototype of an Autonomous Unmanned Surface Vessel (AUSV) conceived, designed and built to operate in coastal areas (0-20 meters of depth), where a traditional boat is poorly manoeuvrable. It is equipped with a series of sensors to acquire high-precision morpho-bathymetric data. In this paper we present the results of the first case study, a bathymetric survey carried out at Sorrento Marina Grande. This survey is a typical application case of the technology; the open prototype MicroVEGA has an interdisciplinary breadth and will be applied to various research fields. In the future, it is expected to yield new knowledge, new survey strategies and an industrial prototype in fiberglass.

  3. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
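
    A small sketch of the general pattern behind the first stages described above: several learners contribute normalized feature importances, the averaged score ranks the features, and nested subsets from the ranking are evaluated by cross-validation. The synthetic 44-feature data, the ensemble members, and the averaging rule are assumptions; the authors' actual DoC scoring and accuracy-threshold analysis are not reproduced.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for 44 sMRI volumetric features from 115 subjects (not the Duke data).
    X, y = make_classification(n_samples=115, n_features=44, n_informative=6, random_state=0)

    # Feature ranking: average normalized importances over an ensemble of learners.
    ensemble = [RandomForestClassifier(random_state=0),
                ExtraTreesClassifier(random_state=0),
                GradientBoostingClassifier(random_state=0)]
    scores = np.zeros(X.shape[1])
    for model in ensemble:
        imp = model.fit(X, y).feature_importances_
        scores += imp / imp.sum()
    ranking = np.argsort(scores)[::-1]

    # Subset generation and evaluation: nested subsets taken from the ranking.
    for k in (5, 10, 20, 44):
        acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, ranking[:k]], y, cv=5).mean()
        print(f"top {k:2d} features: accuracy {acc:.2f}")
    ```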

  4. E/V Nautilus Detection of Isolated Features in the Eastern Pacific Ocean: Newly Discovered Calderas and Methane Seeps

    Science.gov (United States)

    Raineault, N.; Irish, O.; Lubetkin, M.

    2016-02-01

    The E/V Nautilus mapped over 80,000 km2 of the seafloor in the Gulf of Mexico and Eastern Pacific Ocean during its 2015 expedition. The Nautilus used its Kongsberg EM302 multibeam system to map the seafloor prior to remotely operated vehicle (ROV) dives, both for scientific purposes (site selection) and navigational safety. The Nautilus also routinely maps during transits to identify previously unmapped or unresolved seafloor features. During its transit from the Galapagos Islands to the California Borderland, the Nautilus mapped 44,695 km2 of seafloor. Isolated features on the seafloor and in the water column, such as calderas and methane seeps, were detected during this data collection effort. Operating at a frequency of 30 kHz in waters ranging from 1000-5500 m, we discovered caldera features off the coast of Central America. Since seamounts are known hotspots of biodiversity, locating new ones may enrich our understanding of seamounts as "stepping stones" for species distribution and ocean current pathways. Earlier satellite altimetry datasets either did not discern these calderas or recognized only the presence of a bathymetric high without great detail. These new multibeam bathymetry data, gridded at 50 m, give a precise look at these seamounts, which rise 350 to 1400 m above the abyssal depths. The largest of the calderas is circular in shape and is 10,000 m in length and 5,000 m in width, with a distinct circular depression at the center of its highest point, 1,400 m above the surrounding abyssal depth. In the California Borderland region, located between San Diego and Los Angeles, four new seeps were discovered in water depths from 400-1,020 m. ROV exploration of these seeps revealed vent communities. Altogether, these discoveries reinforce how little we know about the global ocean, indicate the presence of isolated deep-sea ecosystems that support biologically diverse communities, and will impact our understanding of seafloor habitat.

  5. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao; Yan, Dongming; Wonka, Peter

    2015-01-01

    The patch layout of 3D surfaces reveals the high-level geometric and topological structures. In this paper, we study the patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines

  6. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. To perform the experiments, the FormSpring.me dataset is used, and the effects of preprocessing methods; of several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and of information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbully detection performance. When the classifiers are compared, C4.5 performs best for the dataset used.
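    The preprocessing, chi-square feature selection and classification steps described above can be sketched with a scikit-learn pipeline; the toy posts and the use of CART in place of C4.5 are our assumptions, not the authors' code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# toy posts standing in for FormSpring.me questions/answers (hypothetical examples)
texts = ["you are awesome", "nobody likes you, loser", "great game today", "go away, you idiot"]
labels = [0, 1, 0, 1]   # 1 = cyberbullying

pipe = Pipeline([
    # alphabetic tokenization, no stemming, no stopword removal (best setting in the paper)
    ("vec", CountVectorizer(token_pattern=r"[A-Za-z]+", lowercase=True, stop_words=None)),
    ("sel", SelectKBest(chi2, k=5)),                   # information gain could be swapped in here
    ("clf", DecisionTreeClassifier(random_state=0)),   # CART as a stand-in for C4.5
])
pipe.fit(texts, labels)
print(pipe.predict(["you are such a loser"]))
```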

  7. Subtidal Bathymetric Changes by Shoreline Armoring Removal and Restoration Projects

    Science.gov (United States)

    Wallace, J.

    2016-12-01

    The Salish Sea, a region with a diverse coastline, is altered by anthropogenic shoreline modifications such as seawalls. In recent years, local organizations have moved to restore these shorelines. Current research monitors the changes restoration projects have on the upper beach, lower beach, and intertidal; however, little research exists to record possible negative effects on the subtidal. The purpose of this research is to utilize multibeam sonar bathymetric data to analyze possible changes to the seafloor structure of the subtidal in response to shoreline modification and to investigate potential ecosystem consequences of shoreline alteration. The subtidal is home to several species including eelgrass (Zostera marina). Eelgrass is an important species in Puget Sound as it provides many key ecosystem functions, including providing habitat for a wide variety of organisms and affecting wave physics and sediment transport in the subtidal. Thus bathymetric changes could impact eelgrass growth and reduce its ability to provide crucial ecosystem services. Three Washington state study sites of completed shoreline restoration projects were used to generate data from areas of varied topographic classification: Seahurst Park in Burien, the Snohomish County Nearshore Restoration Project in Everett, and Cornet Bay State Park on Whidbey Island. Multibeam sonar data were acquired using a Kongsberg EM 2040 system and post-processed in CARIS HIPS to generate a base surface of one-meter resolution. The surface was then imported into the ArcGIS software suite for the generation of spatial metrics. Measurements of change were calculated through a comparison of historical and generated data. Descriptive metrics generated included total elevation change, percent area changed, and a transition matrix of positive and negative change. Additionally, pattern metrics such as surface roughness and Bathymetric Position Index (BPI) were calculated. The comparison of historical data to new data

  8. Secular bathymetric variations of the North Channel in the Changjiang (Yangtze) Estuary, China, 1880-2013: Causes and effects

    Science.gov (United States)

    Mei, Xuefei; Dai, Zhijun; Wei, Wen; Li, Weihua; Wang, Jie; Sheng, Hao

    2018-02-01

    As the interface between the fluvial upland system and the open coast, global estuaries are facing serious challenges owing to various anthropogenic activities; this is especially true of the Changjiang Estuary. Since the establishment of the Three Gorges Dam (TGD), currently the world's largest hydraulic structure, and of certain other local hydraulic engineering structures, the Changjiang Estuary has experienced severe bathymetric variations. It is urgent to analyze the estuarine morphological response to such basin-wide disturbance to enable better management of estuarine environments. The North Channel (NC), the largest anabranched estuary in the Changjiang Estuary, is the focus of this study. Based on the analysis of bathymetric data between 1880 and 2013 and related hydrological data, we developed the first study of the centennial bathymetric variations of the NC. It is found that the bathymetric changes of the NC comprise two main modes: the first mode, representing 64% of the NC variability, indicates observable deposition in the mouth bar and the area outside it (lower reach); the second mode, representing 11% of the NC variability, further demonstrates channel deepening along the inner side of the mouth bar (upper reach) during 1970-2013. Further, the recent erosion observed along the inner side of the mouth bar is caused by the decrease in riverine sediment, especially TGD-induced sediment trapping since 2003, while the deposition along the lower reach since 2003 can be explained by landward sediment transport due to strengthened flood-tide forcing under the joint action of the TGD-induced decrease in seasonal flood discharge and the narrowing of the lower reach induced by land reclamation. Generally, the upper and lower NC reaches are dominated by fluvial and tidal discharge, respectively; however, episodic extreme floods can completely alter the channel morphology by smoothing the entire channel. The results presented herein for the NC enrich our understanding of bathymetric

  9. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach.

    Science.gov (United States)

    Irshad, Humayun; Jalali, Sepehr; Roux, Ludovic; Racoceanu, Daniel; Hwee, Lim Joo; Naour, Gilles Le; Capron, Frédérique

    2013-01-01

    According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. The aim is to investigate the various texture features and Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and classification rate.
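    A small sketch of the blue-ratio transform that the candidate-detection step relies on is given below; the exact formulation used in the paper may differ, so the formula shown is only one common variant.

```python
import numpy as np

def blue_ratio(rgb):
    """Blue-ratio transform (one common formulation; the paper may differ slightly).
    It emphasises blue-dominant, haematoxylin-stained objects such as mitotic nuclei."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    return (100.0 * b / (1.0 + r + g)) * (256.0 / (1.0 + r + g + b))

# co-occurrence, run-length and SIFT features would then be computed on this channel
# before feeding a decision tree or SVM classifier.
```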

  10. Gamelan Music Onset Detection based on Spectral Features

    Directory of Open Access Journals (Sweden)

    Yoyon Kusnendar Suprapto

    2013-03-01

    Full Text Available This research detects onsets of percussive instruments by examining performance on the sound signals of gamelan instruments, traditional music instruments of Indonesia. Onsets play an important role in determining musical rhythmic structure, such as beat and tempo, and are highly required in many applications of music information retrieval. Four onset detection methods that employ spectral features, such as magnitude, phase, and the combination of both, are compared: phase slope (PS), weighted phase deviation (WPD), spectral flux (SF), and rectified complex domain (RCD). These features are extracted by representing the sound signals in the time-frequency domain using the overlapped Short-Time Fourier Transform (STFT) and varying the window length. Onset detection functions are processed through peak-picking using a dynamic threshold. The results showed that by using a suitable window length and parameter setting of the dynamic threshold, an F-measure greater than 0.80 can be obtained for certain methods.
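    As an illustration of the spectral flux (SF) detection function with dynamic-threshold peak-picking, a short NumPy/SciPy sketch follows; the window length, hop size and threshold parameters are placeholder values, not the study's settings.

```python
import numpy as np
from scipy.signal import stft, find_peaks

def spectral_flux_onsets(x, fs, win=2048, hop=512, lam=1.0, m=16):
    """Half-wave rectified spectral flux with a simple dynamic (moving) threshold."""
    _, _, Z = stft(x, fs=fs, nperseg=win, noverlap=win - hop)
    mag = np.abs(Z)
    flux = np.sum(np.maximum(np.diff(mag, axis=1), 0.0), axis=0)   # SF detection function
    thr = np.array([np.median(flux[max(0, i - m):i + m + 1]) +
                    lam * np.mean(flux[max(0, i - m):i + m + 1])
                    for i in range(len(flux))])                    # dynamic threshold
    peaks, _ = find_peaks(flux, height=thr)
    return peaks * hop / fs                                        # onset times in seconds

# usage: onsets = spectral_flux_onsets(gamelan_signal, fs=44100)
```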

  11. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...

  12. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

    Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in the diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined the best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a
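    The leave-one-patient-out evaluation of an SVM classifier described above can be sketched with scikit-learn's LeaveOneGroupOut splitter; the feature matrix and patient labels below are synthetic stand-ins for the clinical MP-MRI features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))            # toy texture-feature vectors per tissue sample
y = rng.integers(0, 2, size=400)          # 1 = cancerous, 0 = healthy (toy labels)
patients = rng.integers(0, 20, size=400)  # patient ID for each sample

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, groups=patients, cv=LeaveOneGroupOut())
print("leave-one-patient-out accuracy:", scores.mean())
```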

  13. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy into the solar system, which can affect satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using the space-time continuity decision rule. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
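    A minimal NumPy implementation of the Extreme Learning Machine (ELM) idea, random hidden weights plus an analytic least-squares output layer, is sketched below; it is a generic ELM, not the authors' trained detector, and the feature extraction from running difference images is not shown.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: a random hidden layer followed by
    an analytic least-squares fit of the output weights."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random non-linear projection
        T = np.eye(int(y.max()) + 1)[y]             # one-hot targets
        self.beta = np.linalg.pinv(H) @ T           # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# X would hold grayscale/texture features of candidate regions from running
# difference images; y = 1 for CME regions. (Feature extraction not shown.)
```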

  14. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Science.gov (United States)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel method for detecting ships that aims to make full use of both the spatial and spectral information from hyperspectral images is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images. Principal component analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing single features and different combinations of multiple features. Compared with the traditional single-feature method and the Support Vector Machine (SVM) model, the proposed method can stably achieve ship detection against complex backgrounds and can effectively improve ship detection accuracy.
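    The Otsu sea-land segmentation, PCA spectral features and Random Forest classification steps can be sketched as follows; the toy cube, the choice of band 90 as the high-SNR SWIR band, and the synthetic labels are assumptions for illustration only.

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cube = rng.random((60, 80, 100))             # toy hyperspectral cube (rows, cols, bands)
swir = cube[..., 90]                         # assume band 90 is a high-SNR SWIR band

sea_mask = swir < threshold_otsu(swir)       # water is dark in SWIR (assumption)

# spectral features: first principal components of each sea pixel's spectrum
spectra = cube[sea_mask]                     # (n_sea_pixels, n_bands)
spec_feat = PCA(n_components=5).fit_transform(spectra)

# texture features (e.g. GLCM statistics per window) would be stacked alongside here
labels = rng.integers(0, 2, size=len(spec_feat))   # toy ship / background labels
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(spec_feat, labels)
print(rf.score(spec_feat, labels))
```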

  15. Swath bathymetric investigation of the seamounts located in the Laxmi Basin, eastern Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Bhattacharya, G.C.; Murty, G.P.S.; Srinivas, K.; Chaubey, A.K.; Sudhakar, T.; Nair, R.R.

    Multibeam (hydrosweep) swath bathymetric investigations revealed the presence of a NNW trending linear seamount chain along the axial part of the Laxmi Basin in the eastern Arabian Sea, between 15°N, 70°15'E and 17°20'N, 69°E. This chain...

  16. INTEGRATION OF IMAGE-DERIVED AND POS-DERIVED FEATURES FOR IMAGE BLUR DETECTION

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2016-06-01

    Full Text Available The image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on a UAV to enable acquisition of the UAV trajectory. It can be used to calculate the positional and angular velocities when the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps include feature extraction, blur image detection and verification. In feature extraction, this study extracts different features from the images and the POS. The image-derived features include the mean and standard deviation of the image gradient. For the POS-derived features, we modify the traditional degree-of-linear-blur (blinear) method to a degree-of-motion-blur (bmotion) based on the collinearity condition equations and POS parameters. In addition, POS parameters such as positional and angular velocities are also adopted as POS-derived features. In blur detection, this study uses a Support Vector Machine (SVM) classifier and the extracted features (i.e. image information, POS data, blinear and bmotion) to separate blurred and sharp UAV images. The experiment utilizes the SenseFly eBee UAV system. The number of images is 129. In blur image detection, we use the proposed degree-of-motion-blur and other image features to classify the blurred and sharp images. The classification result shows that the overall accuracy using image features alone is only 56%. The integration of image-derived and POS-derived features improved the overall accuracy from 56% to 76% in blur detection. Moreover, this study indicates that the performance of the proposed degree-of-motion-blur is better than that of the traditional degree-of-linear-blur.
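    A sketch of the image-derived features (gradient mean and standard deviation) feeding an SVM blur classifier is shown below; the synthetic sharp/blurred frames stand in for the eBee imagery, and the POS-derived features are only indicated in a comment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def gradient_features(img):
    """Image-derived blur features: mean and std of the gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()])

rng = np.random.default_rng(0)
sharp = [rng.random((120, 160)) for _ in range(20)]          # toy "sharp" frames
blurred = [gaussian_filter(im, sigma=3) for im in sharp]     # toy "blurred" frames

# POS-derived features (positional/angular velocity, degree-of-motion-blur) would
# be concatenated to each row before training; here only image features are used.
X = np.array([gradient_features(im) for im in sharp + blurred])
y = np.array([0] * len(sharp) + [1] * len(blurred))          # 1 = blurred

clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, y)
print(clf.score(X, y))
```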

  17. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach

    Directory of Open Access Journals (Sweden)

    Humayun Irshad

    2013-01-01

    Full Text Available Context: According to the Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. Aims: The aim is to investigate the various texture features and the Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. Materials and Methods: We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. The blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-Invariant Feature Transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. Results: The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Conclusions: Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and

  18. Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway

    Science.gov (United States)

    Naseer, M.; Supriadi, I.; Supangkat, S. H.

    2018-03-01

    Unsealed roadsides and problems with the road surface are common causes of road crashes, particularly when combined with curves. Curve traffic signs are an important component for giving early warning to drivers, especially in high-speed traffic such as on a highway. Traffic sign detection has become a very interesting research topic, and this paper discusses the detection of curve traffic signs. Two types of curve signs are discussed, namely the curve turning to the left and the curve turning to the right, and all data samples used are curves recorded from signs on the Bandung - Jakarta Highway. Feature detection of the curve signs uses the Speeded-Up Robust Features (SURF) method, where the detected scene image is 800x450. Of 45 curve-turn-to-the-right images, the system detected the features well in 35 images, a success rate of 77.78%, while of the 45 curve-turn-to-the-left images, the system detected the features well in 34 images, a success rate of 75.56%; the average accuracy of the detection process is therefore 76.67%. The average time for the detection process is 0.411 seconds.
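    SURF keypoint detection and matching of a sign template against a scene can be sketched with OpenCV as below; the file names are placeholders, the ratio-test threshold is a common default rather than the paper's setting, and SURF requires an opencv-contrib build with the non-free modules.

```python
import cv2

# Requires opencv-contrib-python built with the non-free modules (SURF is patented).
# File names below are placeholders.
template = cv2.imread("curve_right_template.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("highway_scene_800x450.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_t, des_t = surf.detectAndCompute(template, None)
kp_s, des_s = surf.detectAndCompute(scene, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
print(len(good), "good matches")   # enough good matches -> the sign is considered detected
```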

  19. Personalized features for attention detection in children with Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Fahimi, Fatemeh; Guan, Cuntai; Wooi Boon Goh; Kai Keng Ang; Choon Guan Lim; Tih Shih Lee

    2017-07-01

    Measuring attention from the electroencephalogram (EEG) has found applications in the treatment of Attention Deficit Hyperactivity Disorder (ADHD). It is of great interest to understand which features in EEG are most representative of attention. Intensive research has been done in the past, and it has been shown that frequency band powers and their ratios are effective features for detecting attention. However, there are still unanswered questions, such as: which features in EEG are most discriminative between attentive and non-attentive states? Are these features common among all subjects, or are they subject-specific and must be optimized for each subject? Using Mutual Information (MI) to perform subject-specific feature selection on a large dataset including 120 ADHD children, we found that besides the theta/beta ratio (TBR), which is commonly used in attention detection and neurofeedback, the relative beta power and the theta/(alpha+beta) ratio (TBAR) are equally significant and informative for attention detection. Interestingly, we found that the relative theta power (which is also commonly used) may not carry sufficient discriminative information by itself (it is informative for only 3.26% of the ADHD children). We have also demonstrated that although these features (relative beta power, TBR and TBAR) are the most important measures for detecting attention on average, different subjects have different sets of most discriminative features.
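    The band-power features discussed above (relative beta, TBR, TBAR) can be computed from a single EEG channel with a Welch power spectral density, as in the sketch below; the band limits and sampling rate are conventional choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(f, pxx, lo, hi):
    sel = (f >= lo) & (f < hi)
    return np.trapz(pxx[sel], f[sel])

def attention_features(eeg, fs=256):
    """Relative beta power, theta/beta ratio (TBR) and theta/(alpha+beta) (TBAR)
    from a single EEG channel, using Welch's power spectral density."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    theta = band_power(f, pxx, 4, 8)
    alpha = band_power(f, pxx, 8, 13)
    beta = band_power(f, pxx, 13, 30)
    total = band_power(f, pxx, 1, 45)
    return {"rel_beta": beta / total, "TBR": theta / beta, "TBAR": theta / (alpha + beta)}

# usage: feats = attention_features(one_channel_segment, fs=256)
```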

  20. Bathymetric surveys at highway bridges crossing the Missouri River in Kansas City, Missouri, using a multibeam echo sounder, 2010

    Science.gov (United States)

    Huizinga, Richard J.

    2010-01-01

    Bathymetric surveys were conducted by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, on the Missouri River in the vicinity of nine bridges at seven highway crossings in Kansas City, Missouri, in March 2010. A multibeam echo sounder mapping system was used to obtain channel-bed elevations for river reaches that ranged from 1,640 to 1,800 feet long and extending from bank to bank in the main channel of the Missouri River. These bathymetric scans will be used by the Missouri Department of Transportation to assess the condition of the bridges for stability and integrity with respect to bridge scour. Bathymetric data were collected around every pier that was in water, except those at the edge of the water or in extremely shallow water, and one pier that was surrounded by a large debris raft. A scour hole was present at every pier for which bathymetric data could be obtained. The scour hole at a given pier varied in depth relative to the upstream channel bed, depending on the presence and proximity of other piers or structures upstream from the pier in question. The surveyed channel bed at the bottom of the scour hole was between 5 and 50 feet above bedrock. At bridges with drilled shaft foundations, generally there was exposure of the upstream end of the seal course and the seal course often was undermined to some extent. At one site, the minimum elevation of the scour hole at the main channel pier was about 10 feet below the bottom of the seal course, and the sides of the drilled shafts were evident in a point cloud visualization of the data at that pier. However, drilled shafts generally penetrated 20 feet into bedrock. Undermining of the seal course was evident as a sonic 'shadow' in the point cloud visualization of several of the piers. Large dune features were present in the channel at nearly all of the surveyed sites, as were numerous smaller dunes and many ripples. Several of the sites are on or near bends in the river

  1. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (with disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with a selection metric, FAST (feature assessment by sliding thresholds), based on the area under the ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
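    A rough sketch of an AUC-driven selection with an mRMR-style redundancy penalty is given below; it illustrates the combination of relevance and redundancy but is not the authors' exact mR-FAST criterion.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_redundancy_select(X, y, k=10, alpha=1.0):
    """Greedy selection: maximise per-feature AUC (relevance, as in FAST) minus the
    mean absolute correlation with already-selected features (redundancy, as in mRMR).
    Illustrative only; not the authors' exact mR-FAST criterion."""
    n_feat = X.shape[1]
    auc = np.array([max(roc_auc_score(y, X[:, j]), 1.0 - roc_auc_score(y, X[:, j]))
                    for j in range(n_feat)])
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(auc))]
    while len(selected) < min(k, n_feat):
        score = auc - alpha * corr[:, selected].mean(axis=1)
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return selected
```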

  2. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    Full Text Available This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim to increase the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.

  3. The estimation of sea floor dynamics from bathymetric surveys of a sand wave area

    NARCIS (Netherlands)

    Dorst, Leendert; Roos, Pieter C.; Hulscher, Suzanne J.M.H.; Lindenbergh, R.C.

    2009-01-01

    The analysis of series of offshore bathymetric surveys provides insight into the morphodynamics of the sea floor. This knowledge helps to improve resurvey policies for the maintenance of port approaches and nautical charting, and to validate morphodynamic models. We propose a method for such an

  4. Magnetic and bathymetric investigations over the Vema Region of the Central Indian Ridge: Tectonic implications

    Digital Repository Service at National Institute of Oceanography (India)

    Drolia, R.K.; Ghose, I.; Subrahmanyam, A.S.; Rao, M.M.M.; Kessarkar, P.M.; Murthy, K.S.R.

    Honeywell Elac narrowbeam echosounder. Post-cruise processing involved digitisation of echograms, interpolation of data at 1 min intervals and merging of the magnetic field intensity data with the bathymetric data. Mathew's correction was applied...

  5. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images

    Science.gov (United States)

    Gong, Maoguo; Yang, Hailun; Zhang, Puzhao

    2017-07-01

    Ternary change detection aims to detect changes and group them into positive and negative changes. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. Firstly, a sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as the pseudo labels for training a CNN model as the change feature classifier. The reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Having training samples and the corresponding pseudo labels, the CNN model can be trained by using back propagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of the sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.

  6. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected by using texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measurement of features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed using the t-test method. P-values were computed for the pairs of features in order to measure their efficacy.
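    Steps iv) and v) can be sketched with scikit-image and SciPy: grey-level co-occurrence energy and homogeneity per ROI, followed by a two-sample t-test; the toy patches below stand in for the real normal and affected ROIs.

```python
import numpy as np
from scipy.stats import ttest_ind
from skimage.feature import graycomatrix, graycoprops   # 'greyco...' in older scikit-image

def glcm_energy_homogeneity(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "energy")[0, 0], graycoprops(glcm, "homogeneity")[0, 0]

# toy ROIs standing in for normal vs affected regions (8-bit grayscale patches)
rng = np.random.default_rng(0)
normal = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(10)]
lesion = [(rng.integers(0, 64, (32, 32)) * 4).astype(np.uint8) for _ in range(10)]

energy_n = [glcm_energy_homogeneity(r)[0] for r in normal]
energy_l = [glcm_energy_homogeneity(r)[0] for r in lesion]
t, p = ttest_ind(energy_n, energy_l)     # p-value quantifies the feature's efficacy
print(t, p)
```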

  7. DroidEnsemble: Detecting Android Malicious Applications with Ensemble of String and Structural Static Features

    KAUST Repository

    Wang, Wei

    2018-05-11

    The Android platform has dominated the operating systems of mobile devices. However, the dramatic increase of Android malicious applications (malapps) has caused serious software failures in the Android system and posed a great threat to users. The effective detection of Android malapps has thus become an emerging yet crucial issue. Characterizing the behaviors of Android applications (apps) is essential to detecting malapps. Most existing work on detecting Android malapps was mainly based on string static features such as permissions and API usage extracted from apps. There also exists work on the detection of Android malapps with structural features, such as the Control Flow Graph (CFG) and Data Flow Graph (DFG). As Android malapps have become increasingly polymorphic and sophisticated, using only one type of static features may result in false negatives. In this work, we propose DroidEnsemble, which takes advantage of both string features and structural features to systematically and comprehensively characterize the static behaviors of Android apps and thus build a more accurate detection model for the detection of Android malapps. We extract each app's string features, including permissions, hardware features, filter intents, restricted API calls, used permissions and code patterns, as well as structural features like the function call graph. We then use three machine learning algorithms, namely Support Vector Machine (SVM), k-Nearest Neighbor (kNN) and Random Forest (RF), to evaluate the performance of these two types of features and of their ensemble. In the experiments, we evaluate our methods and models with 1386 benign apps and 1296 malapps. Extensive experimental results demonstrate the effectiveness of DroidEnsemble. It achieves a detection accuracy of 95.8% with only string features and 90.68% with only structural features. DroidEnsemble reaches a detection accuracy of 98.4% with the ensemble of both types of features, reducing 9 false positives and 12 false

  8. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    Directory of Open Access Journals (Sweden)

    S. Park

    2014-06-01

    Full Text Available This paper presents a novel method for the detection of multiple landmines using ground penetrating radar (GPR). Conventional algorithms mainly focus on the detection of a single landmine and cannot be linearly extended to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of objects buried in the ground, isolation of each object, feature extraction and detection of landmines. The number of objects in the GPR signal is estimated by using the energy projection method. Then signals for the objects are extracted by using the symmetry filtering method. Each signal is then processed for features, which are given as input to a support vector machine (SVM) for landmine detection. Three landmines buried in various ground conditions are considered for the test of the proposed method. The results demonstrate that the proposed method can successfully detect multiple landmines.
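    The energy projection step for estimating the number of buried objects can be sketched as below; the threshold and minimum peak separation are illustrative parameters, not those of the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_object_count(bscan, min_separation=20, rel_height=0.5):
    """Energy projection of a GPR B-scan: sum the squared amplitudes of each A-scan
    (column); peaks in the normalised projection mark candidate buried objects."""
    energy = np.sum(np.asarray(bscan, dtype=np.float64) ** 2, axis=0)
    energy = (energy - energy.min()) / (np.ptp(energy) + 1e-12)
    peaks, _ = find_peaks(energy, height=rel_height, distance=min_separation)
    return len(peaks), peaks

# each isolated segment around a peak would then be symmetry-filtered and passed
# to the SVM-based landmine classifier.
```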

  9. A systematic exploration of the micro-blog feature space for teens stress detection.

    Science.gov (United States)

    Zhao, Liang; Li, Qi; Xue, Yuanyuan; Jia, Jia; Feng, Ling

    2016-01-01

    In the modern stressful society, growing teenagers experience severe stress from many quarters, from school to friends and from self-cognition to interpersonal relationships, which negatively influences their smooth and healthy development. Being timely and accurately aware of teenagers' psychological stress and providing effective measures to help immature teenagers cope with stress are highly valuable to both teenagers and society. Previous work demonstrates the feasibility of sensing teenagers' stress from their tweeting content and context on the open social media micro-blog platform. However, a tweet is still too short for teens to express their stressful status in a comprehensive way. Considering the topic continuity from the tweeting content to the follow-up comments and responses between the teenager and his/her friends, we combine the content of the comments and responses under a tweet to supplement the tweet content. Also, friends' caring comments like "what happened?", "Don't worry!", "Cheer up!", etc. provide hints to a teenager's stressful status. Hence, in this paper, we propose to systematically explore the micro-blog feature space, comprised of four kinds of features [tweeting content features (FW), posting features (FP), interaction features (FI), and comment-response features (FC) between teenagers and friends], for teenagers' stress category and stress level detection. We extract and analyze these feature values and their impacts on teens' stress detection. We evaluate the framework through a real user study of 36 high school students aged 17. Different classifiers are employed to detect potential stress categories and corresponding stress levels. Experimental results show that all the features in the feature space positively affect stress detection, and that linguistic negative emotion, the proportion of negative sentences, friends' caring comments and a teen's reply rate play more significant roles than the remaining features. Micro-blog platform provides

  10. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments’ performance and survey accuracy. PMID:26729117

  11. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-12-29

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy.

  12. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to that of previous swallowing sound detection methods.
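    A sketch of the time-delay embedding and recurrence plot computation that underlies these features is given below; the embedding dimension, delay and recurrence threshold are placeholder values, and only the recurrence rate is shown as an example feature.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens-style time-delay embedding of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=5, eps=0.1):
    """Binary recurrence matrix of the reconstructed trajectory."""
    emb = delay_embed(np.asarray(x, dtype=np.float64), dim, tau)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps * d.max()).astype(np.uint8)

# one simple RP feature: the recurrence rate; sequences of such features over
# sliding windows could then be modelled with hidden Markov models.
# rr = recurrence_plot(tracheal_sound_window).mean()
```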

  13. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to that of previous swallowing sound detection methods.

  14. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    Science.gov (United States)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  15. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect invisible leaking gas, which is dangerous and easily leads to fire or explosion, in a timely manner, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, with existing infrared video based gas leak detection methods, all the moving regions of a video frame can be detected as leaking-gas regions, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas by current gas leak detection methods. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features From Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking-gas detection.
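    A rough OpenCV sketch of the pipeline, GMM background subtraction, FAST corners inside moving regions, and a pixels-per-keypoints score per connected component, is shown below; the video file name, parameter values and the exact form of the PPP condition are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_sequence.avi")                 # placeholder IR video file
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
fast = cv2.FastFeatureDetector_create(threshold=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = cv2.medianBlur(bg.apply(gray), 5)                # foreground mask from the GMM
    mask = (fg > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keypoints = fast.detect(gray, mask * 255)             # corners inside moving regions
    for i in range(1, n):                                 # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        pts = sum(labels[int(kp.pt[1]), int(kp.pt[0])] == i for kp in keypoints)
        ppp = area / (pts + 1)                            # pixels-per-points style score (assumption)
        # diffuse gas plumes yield few corners (high ppp); rigid movers yield many (low ppp)
```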

  16. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    ... that describe object visual appearance such as shape, colour, texture, etc. This thesis focuses on robust object detection and pose estimation of rigid objects using 3D information. The thesis' main contributions are novel feature descriptors together with object detection and pose estimation algorithms. ... The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed. ... of the methods work well for one type of object in a specific scenario; in another scenario or with different objects they might fail, therefore more robust solutions are required. The typical problem solution is the design of robust feature descriptors, where feature descriptors contain information ...

  17. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of a segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  18. Combining Cluster Analysis and Small Unmanned Aerial Systems (sUAS) for Accurate and Low-cost Bathymetric Surveying

    Science.gov (United States)

    Maples, B. L.; Alvarez, L. V.; Moreno, H. A.; Chilson, P. B.; Segales, A.

    2017-12-01

    Given that classical in-situ direct surveying for geomorphological subsurface information in rivers is time-consuming, labor-intensive, costly, and often involves high-risk activities, non-intrusive technologies such as UAS-based and LIDAR-based remote sensing have promising potential and benefits in terms of efficient and accurate measurement of channel topography over large areas within a short time; a tremendous amount of attention has therefore been paid to the development of these techniques. Over the past two decades, efforts have been undertaken to develop specialized techniques that can penetrate the water body and detect the channel bed to derive river and coastal bathymetry. In this research, we develop a low-cost, effective technique for water-body bathymetry. With the use of a sUAS and a lightweight sonar, the bathymetry and volume of a small reservoir have been surveyed. The sUAS surveying approach is conducted at low altitude (2 meters above the water), using the sUAS to tow a small boat with the sonar attached. A cluster analysis is conducted to optimize the sUAS data collection and minimize the standard deviation created by under-sampling in areas of highly variable bathymetry, so measurements are densified in regions characterized by steep slopes and drastic changes in the reservoir bed. This technique provides flexibility and efficiency, and is risk-free for humans, while obtaining high-quality information. The irregularly spaced bathymetric survey is then interpolated using unstructured Triangular Irregular Network (TIN)-based maps to avoid re-gridding or re-sampling issues.
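    Scattered soundings can be interpolated onto a regular grid through a Delaunay/TIN-based linear interpolator in SciPy, as sketched below with synthetic survey points standing in for the sUAS/sonar data.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# toy scattered soundings standing in for the sUAS/sonar survey points
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 500)                # easting (m)
y = rng.uniform(0, 100, 500)                # northing (m)
z = -5 - 0.05 * x - 2 * np.sin(y / 10)      # depth (m), negative down

tin = LinearNDInterpolator(np.column_stack([x, y]), z)   # Delaunay/TIN-based interpolation
xi, yi = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
zi = tin(xi, yi)    # depths on a regular grid; NaN outside the survey's convex hull
print(np.nanmean(zi))
```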

  19. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the information will be reduced and may improve classification's accuracy. This paper proposed a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  20. Recent Advances in Bathymetric Surveying of Continental Shelf Regions Using Autonomous Vehicles

    Science.gov (United States)

    Holland, K. T.; Calantoni, J.; Slocum, D.

    2016-02-01

    Obtaining bathymetric observations within the continental shelf in areas closer to the shore is often time consuming and dangerous, especially when uncharted shoals and rocks present safety concerns to survey ships and launches. However, surveys in these regions are critically important to numerical simulation of oceanographic processes, as bathymetry serves as the bottom boundary condition in operational forecasting models. We will present recent progress in bathymetric surveying using both traditional vessels retrofitted for autonomous operations and relatively inexpensive, small team deployable, Autonomous Underwater Vehicles (AUV). Both systems include either high-resolution multibeam echo sounders or interferometric sidescan sonar sensors with integrated inertial navigation system capabilities consistent with present commercial-grade survey operations. The advantages and limitations of these two configurations employing both unmanned and autonomous strategies are compared using results from several recent survey operations. We will demonstrate how sensor data collected from unmanned platforms can augment or even replace traditional data collection technologies. Oceanographic observations (e.g., sound speed, temperature and currents) collected simultaneously with bathymetry using autonomous technologies provide additional opportunities for advanced data assimilation in numerical forecasts. Discussion focuses on our vision for unmanned and autonomous systems working in conjunction with manned or in-situ systems to optimally and simultaneously collect data in environmentally hostile or difficult to reach areas.

  1. Object detection based on improved color and scale invariant features

    Science.gov (United States)

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

    A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system mainly adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive, but also vulnerable to color images. To overcome these drawbacks, we employ the local color kernel histograms and Haar Wavelet Responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computation costs.

  2. Bathymetric surveys at highway bridges crossing the Missouri and Mississippi Rivers near St. Louis, Missouri, 2010

    Science.gov (United States)

    Huizinga, Richard J.

    2011-01-01

    Bathymetric surveys were conducted by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, on the Missouri and Mississippi Rivers in the vicinity of 12 bridges at 7 highway crossings near St. Louis, Missouri, in October 2010. A multibeam echo sounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 3,280 to 4,590 feet long and extending across the active channel of the Missouri and Mississippi Rivers. These bathymetric scans provide a snapshot of the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be used by the Missouri Department of Transportation to assess the bridges for stability and integrity issues with respect to bridge scour.

  3. Cue combination in a combined feature contrast detection and figure identification task.

    Science.gov (United States)

    Meinhardt, Günter; Persike, Malte; Mesenholl, Björn; Hagemann, Cordula

    2006-11-01

    Target figures defined by feature contrast in spatial frequency, orientation or both cues had to be detected in Gabor random fields and their shape had to be identified in a dual task paradigm. Performance improved with increasing feature contrast and was strongly correlated among both tasks. Subjects performed significantly better with combined cues than with single cues. The improvement due to cue summation was stronger than predicted by the assumption of independent feature specific mechanisms, and increased with the performance level achieved with single cues until it was limited by ceiling effects. Further, cue summation was also strongly correlated among tasks: when there was benefit due to the additional cue in feature contrast detection, there was also benefit in figure identification. For the same performance level achieved with single cues, cue summation was generally larger in figure identification than in feature contrast detection, indicating more benefit when processes of shape and surface formation are involved. Our results suggest that cue combination improves spatial form completion and figure-ground segregation in noisy environments, and therefore leads to more stable object vision.

  4. Improved Detection and Mapping of Deepwater Hydrocarbon Seeps: Optimizing Acquisition and Processing Parameters for Marine Seep Hunting

    Science.gov (United States)

    Mitchell, G. A.; Orange, D.; Gharib, J. J.; Saade, E. J.; Joye, S. B.

    2016-12-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration due to recent advances in offshore geophysical and geochemical technologies. Hydrocarbon seeps are ephemeral, small, discrete, and often difficult to sample on the deep seafloor. Low to mid-frequency multibeam echosounders (MBES) are an ideal exploration tool to remotely locate and map seafloor features associated with seepage. Geophysical signatures from hydrocarbon seeps are evident in bathymetric datasets (fluid expulsion features), seafloor backscatter datasets (carbonate outcrops, gassy sediments, methane hydrate deposits), and midwater backscatter datasets (gas bubble and oil droplet plumes). Interpretation of these geophysical seep signatures in backscatter datasets is a fundamental component in seep hunting. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a backscatter intensity normalization method and a 2X acquisition technique that can enhance the geologic resolvability within backscatter datasets and assist in interpretation and characterization of seeps. We use GC600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting survey. We analyze the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving geologic features in backscatter data, and determine off-nadir detection limits of bubble plumes. GC600's location and robust venting make it a natural laboratory in which to study natural hydrocarbon seepage. The site has been the focus of several near-seafloor surveys as well as in-situ studies using advanced deepwater technologies analyzing fluid flux and composition. These datasets allow for ground-truthing of our remote backscatter measurements prior to commencing exploration within the frontier regions of the Southern Gulf of
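
    The abstract does not spell out the normalization itself; a common baseline for multibeam backscatter is to remove the mean angular response. The NumPy sketch below illustrates only that generic step (per-incidence-angle mean removal) and is not the processing chain used for the EM302 data described above.

        import numpy as np

        def remove_angular_response(intensity_db, incidence_deg, bin_width=1.0):
            """Subtract the mean backscatter level per incidence-angle bin, then
            restore the overall mean so absolute levels remain comparable."""
            intensity_db = np.asarray(intensity_db, dtype=float)
            incidence_deg = np.asarray(incidence_deg, dtype=float)
            bins = np.round(incidence_deg / bin_width).astype(int)
            flattened = np.empty_like(intensity_db)
            for b in np.unique(bins):
                mask = bins == b
                flattened[mask] = intensity_db[mask] - intensity_db[mask].mean()
            return flattened + intensity_db.mean()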

  5. Multibeam bathymetric, gravity and magnetic studies over 79 degrees E fracture zone, central Indian basin

    Digital Repository Service at National Institute of Oceanography (India)

    KameshRaju, K.A.; Ramprasad, T.; Kodagali, V.N.; Nair, R.R.

    A regional scale bathymetric map has been constructed for the 79 degrees E fracture zone (FZ) in the Central Indian Basin between 10 degrees 15'S and 14 degrees 45'S lat. and 78 degrees 55'E and 79 degrees 20'E long. using the high...

  6. Logic based feature detection on incore neutron spectra

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Kiss, S.; Bende-Farkas, S. (Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics)

    1993-04-01

    A general framework for detecting features of incore neutron spectra with a rule-based methodology is presented. As an example, we determine the meaningful peaks in the APSDs (auto power spectral densities). This work is part of a larger project aimed at developing a noise diagnostic expert system. (Author).
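
    The rule base itself is not reproduced in the abstract; as a rough illustration of rule-based peak screening on a power spectral density, the sketch below keeps only peaks that satisfy simple prominence and width rules using SciPy. The thresholds are illustrative assumptions, not the authors' rules.

        import numpy as np
        from scipy.signal import find_peaks

        def meaningful_peaks(psd, freqs, min_prominence=None, min_width=3):
            """Return (frequency, amplitude) pairs for peaks that stand out from the
            local background (prominence rule) and span at least min_width bins."""
            psd = np.asarray(psd, dtype=float)
            if min_prominence is None:
                # rule of thumb: peaks must rise well above the typical deviation from the median
                min_prominence = 3.0 * np.median(np.abs(psd - np.median(psd)))
            idx, _ = find_peaks(psd, prominence=min_prominence, width=min_width)
            return [(freqs[i], psd[i]) for i in idx]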

  7. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which demand 3D models of high quality without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied both to standard dynamic range and HDR images.

  8. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
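
    As a CPU-only point of reference for the scale-space blob detection that the framework accelerates on the GPU, the sketch below uses scikit-image's Laplacian-of-Gaussian blob detector. It does not reproduce the paper's CUDA implementation, ridge metric, or modular post-processing stages; the scale range and threshold are illustrative.

        import numpy as np
        from skimage import color, io
        from skimage.feature import blob_log

        def detect_blobs(path, min_sigma=2, max_sigma=16, num_sigma=8, threshold=0.05):
            """Detect Gaussian blobs across a range of scales; returns rows of (row, col, radius)."""
            image = io.imread(path)
            if image.ndim == 3:                 # assume an RGB frame; convert to grayscale
                image = color.rgb2gray(image)
            blobs = blob_log(image, min_sigma=min_sigma, max_sigma=max_sigma,
                             num_sigma=num_sigma, threshold=threshold)
            blobs[:, 2] *= np.sqrt(2)           # LoG sigma -> approximate blob radius
            return blobs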

  9. The ship edge feature detection based on high and low threshold for remote sensing image

    Science.gov (United States)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human visual system and target features is analyzed, and the ship target is determined by detecting edge features. First, a second-order differential method is used to enhance image quality. Second, to improve the edge operator, high and low threshold contrast is introduced to strengthen the separation of edge and non-edge points; with edges as the foreground and non-edge points as the background, image segmentation is used to achieve edge detection and remove false edges. Finally, the edge features are described based on the edge detection result and the ship target is determined. The experimental results show that the proposed method effectively reduces the number of false edges and achieves high accuracy in remote sensing ship edge detection.
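
    The high/low (hysteresis) threshold idea can be illustrated with OpenCV's Canny detector, which keeps weak edges only when they connect to strong ones. This is a generic sketch with illustrative thresholds, not the operator or threshold selection described in the paper.

        import cv2
        import numpy as np

        def ship_edges(image_path, low=50, high=150):
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            # emphasize edges with a second-order differential (Laplacian) sharpening step
            lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
            sharpened = np.clip(gray.astype(np.float64) - 0.5 * lap, 0, 255).astype(np.uint8)
            # hysteresis: pixels between low and high survive only if linked to a strong edge
            edges = cv2.Canny(sharpened, low, high)
            # crude false-edge suppression: remove small isolated responses
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
            return cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)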

  10. Significance of MPEG-7 textural features for improved mass detection in mammography.

    Science.gov (United States)

    Eltonsy, Nevine H; Tourassi, Georgia D; Fadeev, Aleksey; Elmaghraby, Adel S

    2006-01-01

    The purpose of the study is to investigate the significance of MPEG-7 textural features for improving the detection of masses in screening mammograms. The detection scheme was originally based on morphological directional neighborhood features extracted from mammographic regions of interest (ROIs). Receiver operating characteristic (ROC) analysis was performed to evaluate the performance of each set of features independently and merged into a back-propagation artificial neural network (BPANN) using the leave-one-out sampling scheme (LOOSS). The study was based on a database of 668 mammographic ROIs (340 depicting cancer regions and 328 depicting normal parenchyma). Overall, the ROC area index of the BPANN using the directional morphological features was Az=0.85+/-0.01. The MPEG-7 edge histogram descriptor-based BPANN showed an ROC area index of Az=0.71+/-0.01, while homogeneous textural descriptors using 30 and 120 channels helped the BPANN achieve similar ROC area indexes of Az=0.882+/-0.02 and Az=0.877+/-0.01, respectively. After merging the MPEG-7 homogeneous textural features with the directional neighborhood features, the performance of the BPANN increased, providing an ROC area index of Az=0.91+/-0.01. The MPEG-7 homogeneous textural descriptor significantly improved the morphology-based detection scheme.

  11. Bathymetric patterns in standing stock and diversity of deep-sea nematodes at the long-term ecological research observatory HAUSGARTEN (Fram Strait)

    Science.gov (United States)

    Grzelak, Katarzyna; Kotwicki, Lech; Hasemann, Christiane; Soltwedel, Thomas

    2017-08-01

    Bathymetric patterns in standing stocks and diversity are a major topic of investigation in deep-sea biology. From the literature, responses of metazoan meiofauna and nematodes to bathymetric gradients are well studied, with a general decrease in biomass and abundance with increasing water depth, while bathymetric diversity gradients often, though not invariably, show a unimodal pattern. Spatial distribution patterns of nematode communities along bathymetric gradients are coupled with surface-water processes and interacting physical and biological factors within the benthic system. We studied the nematode communities at the Long-Term Ecological Research (LTER) observatory HAUSGARTEN, located in the Fram Strait at the Marginal Ice Zone, with respect to their standing stocks as well as structural and functional diversity. We evaluated whether nematode density, biomass and diversity indices, such as H0, Hinf, EG(50) and Θ-1, are linked with environmental conditions along a bathymetric transect spanning from 1200 m to 5500 m water depth. Nematode abundance, biomass and diversity, as well as food availability from phytodetritus sedimentation (indicated by chloroplastic pigments in the sediments), were higher at the stations located at upper bathyal depths (1200-2000 m) and tended to decrease with increasing water depth. A faunal shift was found below 3500 m water depth, where genus composition and trophic structure changed significantly and structural diversity indices markedly decreased. A strong dominance of very few genera and high genus turnover, particularly at the abyssal stations (4000-5500 m), suggest that environmental conditions were rather unfavorable for most genera. Despite the high concentrations of sediment-bound chloroplastic pigments and elevated standing stocks found at the deepest station (5500 m), nematode genus diversity remained the lowest compared to all other stations. This study provides a further insight into the knowledge of deep-sea nematodes

  12. The impact of signal normalization on seizure detection using line length features.

    Science.gov (United States)

    Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J

    2015-10-01

    Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
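
    For concreteness, the line length feature and one plausible reading of a median-based decaying-memory normalization can be sketched as follows; the exact update rule, memory length and decay constant in the paper may differ.

        import numpy as np

        def line_length(epoch):
            """Line length of one EEG epoch: sum of absolute sample-to-sample differences."""
            return float(np.sum(np.abs(np.diff(np.asarray(epoch, dtype=float)))))

        def normalized_line_length(epochs, decay=0.99, memory=100):
            """Divide each epoch's line length by a decaying-memory baseline that
            tracks the running median of recent values."""
            features, history, baseline = [], [], None
            for epoch in epochs:
                ll = line_length(epoch)
                history.append(ll)
                current_median = float(np.median(history[-memory:]))
                baseline = current_median if baseline is None else \
                    decay * baseline + (1 - decay) * current_median
                features.append(ll / baseline if baseline > 0 else 0.0)
            return np.array(features)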

  13. A review of feature detection and match algorithms for localization and mapping

    Science.gov (United States)

    Li, Shimiao

    2017-09-01

    Localization and mapping is an essential ability of a robot to keep track of its own location in an unknown environment. Among existing methods for this purpose, vision-based methods are more effective solutions for being accurate, inexpensive and versatile. Vision-based methods can generally be categorized as feature-based approaches and appearance-based approaches. The feature-based approaches show higher performance in textured scenarios. However, their performance depends highly on the applied feature-detection algorithms. In this paper, we survey algorithms for feature detection, which is an essential step in achieving vision-based localization and mapping, and present the mathematical models of the algorithms one by one. To compare the performance of the algorithms, we conducted a series of experiments on their accuracy, speed, scale invariance and rotation invariance. The results of the experiments showed that ORB is the fastest algorithm in detecting and matching features, with a speed more than 10 times that of SURF and approximately 40 times that of SIFT. SIFT, although with no advantage in terms of speed, yields the most correct matching pairs and proves its accuracy.
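
    A minimal OpenCV sketch of the kind of timing comparison described above, using OpenCV's own ORB and SIFT implementations (not the paper's benchmark code) on a pair of reasonably textured grayscale images:

        import time
        import cv2

        def compare_detectors(img1_path, img2_path):
            """Time detection plus brute-force matching for ORB and SIFT on an image pair."""
            img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
            img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
            results = {}
            for name, detector, norm in [("ORB", cv2.ORB_create(), cv2.NORM_HAMMING),
                                         ("SIFT", cv2.SIFT_create(), cv2.NORM_L2)]:
                start = time.perf_counter()
                kp1, des1 = detector.detectAndCompute(img1, None)
                kp2, des2 = detector.detectAndCompute(img2, None)
                matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
                results[name] = {"keypoints": (len(kp1), len(kp2)),
                                 "matches": len(matches),
                                 "seconds": time.perf_counter() - start}
            return results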

  14. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    Science.gov (United States)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image objects and fail to effectively mine and fuse the advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a remote sensing image change detection algorithm based on the fusion of multiple features. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. Using the Earth Mover's Distance (EMD) between corresponding objects from different periods, the color feature distance and the edge (straight-line) feature distance are combined by adaptive weighting to construct the object heterogeneity. Finally, curvature histogram analysis of the image objects yields the change detection result. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.

  15. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, as well as an algorithm capable of differentiating between face/non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a lot of research is dedicated to finding a real-time solution. The algorithm should remain simple enough to run in real time while not compromising on the challenges encountered during the detection and localization phase, i.e., it should be invariant to scale, translation, and (+-45 degree) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and color cues to classify skin color. A morphological operation with a union-structure component-labeling algorithm extracts contiguous regions. Scale normalization is applied by the nearest neighbor interpolation method to avoid the effect of different scales. Using the aspect ratio of width to height, a region of interest (ROI) is obtained and then passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask for facial feature localization. The empirical results show an accuracy of 90% for five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)

  16. Automatic detection of solar features in HSOS full-disk solar images using guided filter

    Science.gov (United States)

    Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang

    2018-02-01

    A procedure is introduced for the automatic detection of solar features in full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter is adopted to enhance the edges of solar features and restrain the solar limb darkening; this is the first time the guided filter has been introduced into astronomical target detection. Specific features are then detected by the Otsu algorithm and a further threshold processing technique. Compared with other automatic detection procedures, our procedure has advantages such as real-time operation and reliability, as well as no need for a local threshold. It also greatly reduces the amount of computation, benefiting from the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the results show that the number of features detected by our procedure is well consistent with manual detection.
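
    A rough sketch of the filtering and thresholding stages (median filter, guided filter, Otsu) is shown below. It assumes opencv-contrib-python for cv2.ximgproc.guidedFilter, uses illustrative radius and regularization values, and omits the HSOS-specific limb-darkening handling and the further threshold processing described above.

        import cv2

        def detect_dark_solar_features(path, radius=8, eps=100.0):
            """Median-filter, edge-preserving smooth, then Otsu-threshold a full-disk image;
            dark features (e.g., sunspots) come out as nonzero pixels in the mask."""
            disk = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            denoised = cv2.medianBlur(disk, 5)                 # remove impulsive noise
            smoothed = cv2.ximgproc.guidedFilter(guide=denoised, src=denoised,
                                                 radius=radius, eps=eps)
            _, mask = cv2.threshold(smoothed, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            return mask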

  17. Feature-fused SSD: fast detection for small objects

    Science.gov (United States)

    Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian

    2018-04-01

    Small object detection is a challenging task in computer vision due to the limited resolution and information of small objects. To solve this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot MultiBox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method for introducing contextual information in SSD, in order to improve the accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, with a 2-3 point improvement on some small object categories in particular. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.

  18. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Directory of Open Access Journals (Sweden)

    Edwin B. Olson

    2010-11-01

    Full Text Available Feature extraction is a central step in processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.

  19. A general purpose feature extractor for light detection and ranging data.

    Science.gov (United States)

    Li, Yangming; Olson, Edwin B

    2010-01-01

    Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.

  20. Breast Cancer Detection with Gabor Features from Digital Mammograms

    Directory of Open Access Journals (Sweden)

    Yufeng Zheng

    2010-01-01

    Full Text Available A new breast cancer detection algorithm, named the “Gabor Cancer Detection” (GCD) algorithm, utilizing Gabor features is proposed. Three major steps are involved in the GCD algorithm: preprocessing, segmentation (generating alarm segments), and classification (reducing false alarms). In preprocessing, a digital mammogram is down-sampled, quantized, denoised and enhanced. Nonlinear diffusion is used for noise suppression. In segmentation, a band-pass filter is formed by rotating a 1-D Gaussian filter (off center) in frequency space, termed a “Circular Gaussian Filter” (CGF). A CGF can be uniquely characterized by specifying a central frequency and a frequency band. A mass or calcification is a space-occupying lesion and usually appears as a bright region on a mammogram. The alarm segments (regions suspected to be masses/calcifications) can be extracted using a threshold that is decided adaptively upon histogram analysis of the CGF-filtered mammogram. In classification, a Gabor filter bank is formed with five bands by four orientations (horizontal, vertical, 45 and 135 degrees) in the Fourier frequency domain. For each mammographic image, twenty Gabor-filtered images are produced. A set of edge histogram descriptors (EHD) are then extracted from the 20 Gabor images for classification. An EHD signature is computed with the four orientations of the Gabor images along each band, and the five EHD signatures are then joined together to form an EHD feature vector of 20 dimensions. With the EHD features, the fuzzy C-means clustering technique and a k-nearest neighbor (KNN) classifier are used to reduce the number of false alarms. The experimental results tested on the DDSM database (University of South Florida) show the promise of the GCD algorithm in breast cancer detection, which achieved TP (true positive) rate = 90% at FPI (false positives per image) = 1.21 in mass detection, and TP = 93% at FPI = 1.19 in calcification detection.
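
    As a small illustration of the Gabor filter bank at the heart of the classification stage (four orientations across several wavelengths), the sketch below computes a simple mean-magnitude response per channel with OpenCV. It is not the GCD algorithm's EHD signature; the kernel size, wavelengths and bandwidth are illustrative.

        import cv2
        import numpy as np

        def gabor_bank_features(gray, num_orientations=4, wavelengths=(4, 8, 16, 32, 64)):
            """Filter an image with a bank of Gabor kernels (orientations x wavelengths)
            and return the mean response magnitude per channel as a feature vector."""
            img = gray.astype(np.float32) / 255.0
            features = []
            for lambd in wavelengths:
                for k in range(num_orientations):
                    theta = k * np.pi / num_orientations        # 0, 45, 90, 135 degrees
                    kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=0.56 * lambd,
                                                theta=theta, lambd=lambd, gamma=0.5, psi=0)
                    response = cv2.filter2D(img, cv2.CV_32F, kernel)
                    features.append(float(np.abs(response).mean()))
            return np.array(features)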

  1. Computing Adaptive Feature Weights with PSO to Improve Android Malware Detection

    Directory of Open Access Journals (Sweden)

    Yanping Xu

    2017-01-01

    Full Text Available Android malware detection is a complex and crucial issue. In this paper, we propose a malware detection model using a support vector machine (SVM method based on feature weights that are computed by information gain (IG and particle swarm optimization (PSO algorithms. The IG weights are evaluated based on the relevance between features and class labels, and the PSO weights are adaptively calculated to result in the best fitness (the performance of the SVM classification model. Moreover, to overcome the defects of basic PSO, we propose a new adaptive inertia weight method called fitness-based and chaotic adaptive inertia weight-PSO (FCAIW-PSO that improves on basic PSO and is based on the fitness and a chaotic term. The goal is to assign suitable weights to the features to ensure the best Android malware detection performance. The results of experiments indicate that the IG weights and PSO weights both improve the performance of SVM and that the performance of the PSO weights is better than that of the IG weights.

  2. Metabolic costs imposed by hydrostatic pressure constrain bathymetric range in the lithodid crab Lithodes maja.

    Science.gov (United States)

    Brown, Alastair; Thatje, Sven; Morris, James P; Oliphant, Andrew; Morgan, Elizabeth A; Hauton, Chris; Jones, Daniel O B; Pond, David W

    2017-11-01

    The changing climate is shifting the distributions of marine species, yet the potential for shifts in depth distributions is virtually unexplored. Hydrostatic pressure is proposed to contribute to a physiological bottleneck constraining depth range extension in shallow-water taxa. However, bathymetric limitation by hydrostatic pressure remains undemonstrated, and the mechanism limiting hyperbaric tolerance remains hypothetical. Here, we assess the effects of hydrostatic pressure in the lithodid crab Lithodes maja (bathymetric range 4-790 m depth, approximately equivalent to 0.1 to 7.9 MPa hydrostatic pressure). Heart rate decreased with increasing hydrostatic pressure, and was significantly lower at ≥10.0 MPa than at 0.1 MPa. Oxygen consumption increased with increasing hydrostatic pressure to 12.5 MPa, before decreasing as hydrostatic pressure increased to 20.0 MPa; oxygen consumption was significantly higher at 7.5-17.5 MPa than at 0.1 MPa. Increases in expression of genes associated with neurotransmission, metabolism and stress were observed between 7.5 and 12.5 MPa. We suggest that hyperbaric tolerance in L. maja may be oxygen-limited by hyperbaric effects on heart rate and metabolic rate, but that L. maja's bathymetric range is limited by metabolic costs imposed by the effects of high hydrostatic pressure. These results advocate including hydrostatic pressure in a complex model of environmental tolerance, where energy limitation constrains biogeographic range, and facilitate the incorporation of hydrostatic pressure into the broader metabolic framework for ecology and evolution. Such an approach is crucial for accurately projecting biogeographic responses to changing climate, and for understanding the ecology and evolution of life at depth. © 2017. Published by The Company of Biologists Ltd.

  3. MixDroid: A multi-features and multi-classifiers bagging system for Android malware detection

    Science.gov (United States)

    Huang, Weiqing; Hou, Erhang; Zheng, Liang; Feng, Weimiao

    2018-05-01

    In the past decade, the Android platform has rapidly taken over the mobile market for its superior convenience and open source characteristics. However, with the popularity of Android, malware targeting Android devices is increasing rapidly, and the conventional rule-based and expert-experience approaches are no longer able to handle such explosive growth. In this paper, combining the theory of natural language processing and machine learning, we not only implement the basic extraction of application permission features, but also propose two innovative feature extraction schemes, Dalvik opcode features and malicious code images, and implement MixDroid, an automatic Android malware detection system based on multiple features and multiple classifiers. According to our experimental results on 20,000 Android applications, the detection accuracy of MixDroid is 98.1%, which proves our schemes' effectiveness in Android malware detection.

  4. Learning Rich Features from RGB-D Images for Object Detection and Segmentation

    OpenAIRE

    Gupta, Saurabh; Girshick, Ross; Arbeláez, Pablo; Malik, Jitendra

    2014-01-01

    In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an av...

  5. Non-contact feature detection using ultrasonic Lamb waves

    Science.gov (United States)

    Sinha, Dipen N [Los Alamos, NM

    2011-06-28

    Apparatus and method for non-contact ultrasonic detection of features on or within the walls of hollow pipes are described. An air-coupled, high-power ultrasonic transducer for generating guided waves in the pipe wall, and a high-sensitivity, air-coupled transducer for detecting these waves, are disposed at a distance apart and at chosen angle with respect to the surface of the pipe, either inside of or outside of the pipe. Measurements may be made in reflection or transmission modes depending on the relative position of the transducers and the pipe. Data are taken by sweeping the frequency of the incident ultrasonic waves, using a tracking narrow-band filter to reduce detected noise, and transforming the frequency domain data into the time domain using fast Fourier transformation, if required.

  6. Exploration of available feature detection and identification systems and their performance on radiographs

    Science.gov (United States)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.

  7. Structural interpretation of the Konkan basin, southwestern continental margin of India, based on magnetic and bathymetric data

    Digital Repository Service at National Institute of Oceanography (India)

    Subrahmanyam, V.; Krishna, K.S.; Murty, G.P.S.; Rao, D.G.; Ramana, M.V.; Rao, M.G.

    Magnetic and bathymetric studies on the Konkan basin of the southwestern continental margin of India reveal prominent NNW-SSE, NW-SE, ENE-WSW, and WNE-ESE structural trends. The crystalline basement occurs at about 5-6 km below the mean sea level. A...

  8. An Improved Semisupervised Outlier Detection Algorithm Based on Adaptive Feature Weighted Clustering

    Directory of Open Access Journals (Sweden)

    Tingquan Deng

    2016-01-01

    Full Text Available There exist already various approaches to outlier detection, in which semisupervised methods achieve encouraging superiority due to the introduction of prior knowledge. In this paper, an adaptive feature weighted clustering-based semisupervised outlier detection strategy is proposed. This method maximizes the membership degree of a labeled normal object to the cluster it belongs to and minimizes the membership degrees of a labeled outlier to all clusters. In consideration of distinct significance of features or components in a dataset in determining an object being an inlier or outlier, each feature is adaptively assigned different weights according to the deviation degrees between this feature of all objects and that of a certain cluster prototype. A series of experiments on a synthetic dataset and several real-world datasets are implemented to verify the effectiveness and efficiency of the proposal.

  9. A Research on Fast Face Feature Points Detection on Smart Mobile Devices

    Directory of Open Access Journals (Sweden)

    Xiaohe Li

    2018-01-01

    Full Text Available We explore how to improve the performance of face feature point detection on mobile terminals from three aspects. First, we optimize the models used in SDM algorithms via PCA and spectrum clustering. Second, we propose an evaluation criterion using linear discriminant analysis to choose the best local feature descriptors, which play a critical role in feature point detection. Third, we take advantage of the multicore architecture of the mobile terminal and parallelize the optimized SDM algorithm to improve efficiency further. The experimental observations show that our final GPC-SDM (improved Supervised Descent Method using spectrum clustering, PCA, and GPU acceleration) reduces memory usage and is efficient enough to meet real-time requirements.

  10. NOAA TIFF Image - 4m Bathymetric Depth of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Depth GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  11. NOAA TIFF Image - 4m Bathymetric Curvature of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Curvature GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  12. Multispectral image feature fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Fields, D.J.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-11-15

    Our system fuses information contained in registered images from multiple sensors to reduce the effect of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six visible wavelength bands, dual-band infrared (5 micron and 10 micron), and ground penetrating radar. Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are better separated in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, holes made by animals and natural processes, etc.) and some artifacts.

  13. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    Science.gov (United States)

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

    The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  14. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  15. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    Science.gov (United States)

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  16. Attention in the processing of complex visual displays: detecting features and their combinations.

    Science.gov (United States)

    Farell, B

    1984-02-01

    The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlaid the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.

  17. Matching-range-constrained real-time loop closure detection with CNNs features.

    Science.gov (United States)

    Bai, Dongdong; Wang, Chaoqun; Zhang, Bo; Yi, Xiaodong; Tang, Yuhua

    2016-01-01

    Loop closure detection (LCD) is an essential part of visual simultaneous localization and mapping (SLAM) systems. LCD is capable of identifying and compensating for the accumulated drift of localization algorithms to produce a consistent map if the loops are checked correctly. Deep convolutional neural networks (CNNs) have outperformed state-of-the-art solutions that use traditional hand-crafted features in many computer vision and pattern recognition applications. After the great success of CNNs, there has been much interest in applying CNN features to robotic fields such as visual LCD. Some researchers focus on using a pre-trained CNN model as a method of generating an image representation appropriate for visual loop closure detection in SLAM. However, there are many fundamental differences and challenges between simple computer vision applications and robotic applications. First, adjacent images in a loop closure detection dataset might resemble each other more than the images that actually form the loop closure. Second, real-time performance is one of the most critical demands for robots. In this paper, we focus on making use of the features generated by CNN layers to implement LCD in real environments. To address the above challenges, we explicitly provide a value that limits the matching range of images to solve the first problem; meanwhile, we obtain better results than state-of-the-art methods and improve real-time performance using an efficient feature compression method.
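
    The matching-range constraint itself can be sketched in a few lines: given one global descriptor per frame (for example, pooled activations of a pre-trained CNN), candidate loop closures are accepted only against frames older than a fixed gap, so naturally similar adjacent images are never matched. The gap and similarity threshold below are illustrative, and the feature compression step is omitted.

        import numpy as np

        def detect_loop_closures(features, min_gap=50, threshold=0.9):
            """Return (current_frame, matched_frame, similarity) triples using cosine
            similarity, ignoring the most recent min_gap frames for each query."""
            feats = np.asarray(features, dtype=float)
            feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
            closures = []
            for i in range(len(feats)):
                if i <= min_gap:
                    continue
                sims = feats[: i - min_gap] @ feats[i]      # compare to older frames only
                j = int(np.argmax(sims))
                if sims[j] >= threshold:
                    closures.append((i, j, float(sims[j])))
            return closures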

  18. Digital mammography: Mixed feature neural network with spectral entropy decision for detection of microcalcifications

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, B. [Univ. of South Florida, Tampa, FL (United States)]|[Nanjing Univ. of Posts and Telecommunications (China). Dept. of Telecommunication Engineering; Qian, W.; Clarke, L.P. [Univ. of South Florida, Tampa, FL (United States)

    1996-10-01

    A computationally efficient mixed feature based neural network (MFNN) is proposed for the detection of microcalcification clusters (MCC's) in digitized mammograms. The MFNN employs features computed in both the spatial and spectral domain and uses spectral entropy as a decision parameter. Backpropagation with Kalman Filtering (KF) is employed to allow more efficient network training as required for evaluation of different features, input images, and related error analysis. A previously reported, wavelet-based image-enhancement method is also employed to enhance microcalcification clusters for improved detection. The relative performance of the MFNN for both the raw and enhanced images is evaluated using a common image database of 30 digitized mammograms, with 20 images containing 21 biopsy proven MCC's and ten normal cases. The computed sensitivity (true positive (TP) detection rate) was 90.1% with an average low false positive (FP) detection of 0.71 MCCs/image for the enhanced images using a modified k-fold validation error estimation technique. The corresponding computed sensitivity for the raw images was reduced to 81.4% and with 0.59 FP's MCCs/image. A relative comparison to an earlier neural network (NN) design, using only spatially related features, suggests the importance of the addition of spectral domain features when the raw image data are analyzed.
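
    In its generic form, spectral entropy is the Shannon entropy of the normalized power spectrum of a region of interest; the NumPy sketch below shows that generic definition, which may differ in detail from the decision parameter used in the paper.

        import numpy as np

        def spectral_entropy(roi):
            """Shannon entropy (bits) of the normalized 2-D power spectrum of an image region."""
            roi = np.asarray(roi, dtype=float)
            power = np.abs(np.fft.fft2(roi - roi.mean())) ** 2
            total = power.sum()
            if total == 0:                      # flat region: maximally uncertain by convention
                return float(np.log2(power.size))
            p = power.ravel() / total
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))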

  19. Digital mammography: Mixed feature neural network with spectral entropy decision for detection of microcalcifications

    International Nuclear Information System (INIS)

    Zheng, B.

    1996-01-01

    A computationally efficient mixed feature based neural network (MFNN) is proposed for the detection of microcalcification clusters (MCC's) in digitized mammograms. The MFNN employs features computed in both the spatial and spectral domain and uses spectral entropy as a decision parameter. Backpropagation with Kalman Filtering (KF) is employed to allow more efficient network training as required for evaluation of different features, input images, and related error analysis. A previously reported, wavelet-based image-enhancement method is also employed to enhance microcalcification clusters for improved detection. The relative performance of the MFNN for both the raw and enhanced images is evaluated using a common image database of 30 digitized mammograms, with 20 images containing 21 biopsy proven MCC's and ten normal cases. The computed sensitivity (true positive (TP) detection rate) was 90.1% with an average low false positive (FP) detection of 0.71 MCCs/image for the enhanced images using a modified k-fold validation error estimation technique. The corresponding computed sensitivity for the raw images was reduced to 81.4% and with 0.59 FP's MCCs/image. A relative comparison to an earlier neural network (NN) design, using only spatially related features, suggests the importance of the addition of spectral domain features when the raw image data are analyzed

  20. AtlantOS WP2, Enhancement of ship-based observing networks - Bathymetric integration and visualization of Europe's data holdings

    Science.gov (United States)

    Wölfl, Anne-Cathrin; Devey, Colin; Augustin, Nico

    2017-04-01

    The European Horizon 2020 research and innovation project AtlantOS - Optimising and Enhancing the Integrated Atlantic Ocean Observing Systems - aims to improve the present-day ocean observing activities in the Atlantic Ocean by establishing a sustainable, efficient and integrated Atlantic Ocean Observing System. 62 partners from 18 countries are working on solutions I) to improve international collaboration in the design, implementation and benefit sharing of ocean observing, II) to promote engagement and innovation in all aspects of ocean observing, III) to facilitate free and open access to ocean data and information, IV) to enable and disseminate methods of achieving quality and authority of ocean information, V) to strengthen the Global Ocean Observing System (GOOS) and to sustain observing systems that are critical for the Copernicus Marine Environment Monitoring Service and its applications and VI) to contribute to the aims of the Galway Statement on Atlantic Ocean Cooperation. The Work Package 2 of the AtlantOS project focuses on improving, expanding, integrating and innovating ship-based observations. One of the tasks is the provision of Europe's existing and future bathymetric data sets from the Atlantic Ocean in accessible formats enabling easy processing and visualization for stakeholders. Furthermore, a new concept has recently been implemented, where three large German research vessels continuously collect bathymetric data during their transits. All data sets are gathered and processed with the help of national data centers and partner institutions and integrated into existing open access data systems, such as Pangaea in Germany, EMODnet at European level and GMRT (Global Multi-Resolution Topography synthesis) at international level. The processed data will be linked to the original data holdings, that can easily be accessed if required. The overall aim of this task is to make bathymetric data publicly available for specialists and non-specialists both

  1. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

    Directory of Open Access Journals (Sweden)

    Huan Ni

    2016-09-01

    Full Text Available This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D-point clouds. This method is named Analysis of Geometric Properties of Neighborhoods (AGPN, and it includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes geometric properties of each query point’s neighborhood, and then combines RANdom SAmple Consensus (RANSAC and angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting in the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method obtains a promising, reliable, and high performance in detecting edges and tracing feature lines in 3D-point clouds. Moreover, AGPN is insensitive to the point density of the input data.

  2. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.

  3. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature

  4. NOAA TIFF Image - 4m Bathymetric Depth Range of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Depth Range GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the...

  5. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    Science.gov (United States)

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.

  6. NOAA TIFF Image - 4m Bathymetric Mean Depth of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric Mean Depth GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  7. Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming

    Science.gov (United States)

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas A.; Summers, Ronald M.

    2009-02-01

    Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible combination for improving colonic polyp detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then, in order to make full use of the traditional features (defined as group A) and the new features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel based on the combined set of features. We performed a leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 6-9 mm polyp detections). Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a false positive rate of 7 per patient, the sensitivity on 6-9 mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p<=0.01).
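
    The semi-definite programming step is not reproduced here; as a simplified stand-in for combining the heterogeneous groups, the sketch below feeds a fixed convex combination of two RBF kernels (one per feature group) to a precomputed-kernel SVM. The weight beta, which the paper's multiple kernel learning would optimize, is left as a plain parameter.

    # Combined-kernel SVM over two heterogeneous feature groups (fixed weight beta).
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def combined_kernel(XA, XB, XA2=None, XB2=None, beta=0.5, gamma=0.1):
        # Rows of the result index XA2/XB2 (or XA/XB for training), columns index XA/XB.
        KA = rbf_kernel(XA if XA2 is None else XA2, XA, gamma=gamma)
        KB = rbf_kernel(XB if XB2 is None else XB2, XB, gamma=gamma)
        return beta * KA + (1.0 - beta) * KB

    def fit_predict(XA_tr, XB_tr, y_tr, XA_te, XB_te, beta=0.5):
        K_tr = combined_kernel(XA_tr, XB_tr, beta=beta)                 # (n_tr, n_tr)
        K_te = combined_kernel(XA_tr, XB_tr, XA_te, XB_te, beta=beta)   # (n_te, n_tr)
        clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
        return clf.predict(K_te)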

  8. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is the lack of awareness and proper health care. Since prevention is better than cure, a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help in saving their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that the cancer cell can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set technique to see which will suit the automated system in terms of a higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images and the data extracted are subjected to data mining techniques to determine which feature is best for which stage of cancer. The results thus obtained clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  9. The gas-hydrate-related seabed features in the Palm Ridge off southwest Taiwan

    Science.gov (United States)

    Su, Zheng-Wei; Hsu, Shu-Kun; Tsai, Ching-Hui; Chen, Song-Chuen; Lin, Hsiao-Shan

    2016-04-01

    The offshore area of SW Taiwan is located in the convergence zone between the northern continental margin of the South China Sea and the Manila subduction complex. Our study area, the Palm Ridge, is located in the passive continental margin. According to geophysical, geochemical and geothermal data, abundant gas hydrate may exist in the offshore area of SW Taiwan. In this study, we examine the relation between the seabed features and the gas hydrate formation of the Palm Ridge. The data used in this study include high-resolution sidescan sonar images, sub-bottom profiles, echo sounder data, multi-beam bathymetric data, multi-channel reflection seismic data and submarine photography of the Palm Ridge. Our results show that the existing authigenic carbonates, gas seepages and gas plumes are mainly distributed on the bathymetric high of the Palm Ridge. Numerous submarine landslides have occurred where the BSR distribution is not continuous. We suggest that this may be because rapid slope failure has changed the gas hydrate stability zone. We also found several faults on the R3.1 anticline structure east of the deformation front. These features imply that abundant deep methane gases have migrated to shallow strata, causing submarine landslides or collapse. The detailed relationship between gas migration and submarine landslides needs further study.

  10. Bathymetric highs in the mid-slope region of the western continental margin of India - Structure and mode of origin

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, D.G.; Paropkari, A.L.; Krishna, K.S.; Chaubey, A.K.; Ajay, K.K.; Kodagali, V.N.

    Analysis of the multi- and single beam bathymetric, seismic, magnetic and free-air gravity (ship-borne and satellite derived) data from the western continental margin of India between 12 degrees 40 minutes N and 15 degrees N had revealed...

  11. Fabric defect detection based on visual saliency using deep feature and low-rank recovery

    Science.gov (United States)

    Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan

    2018-04-01

    Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, the initial network parameters are obtained by unsupervised training on the large MNIST dataset. Supervised fine-tuning on a fabric image library is then carried out with Convolutional Neural Networks (CNNs), generating a more accurate deep neural network model. Second, the fabric images are uniformly divided into image blocks of the same size, and their multi-layer deep features are extracted using the trained deep network. Thereafter, all the extracted features are concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defect. In the end, an iterative optimal threshold segmentation algorithm is utilized to segment the saliency maps generated from the sparse matrix to locate the fabric defect area. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing the fabric texture than traditional LBP, HOG and other hand-crafted feature extraction methods, and that the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.
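
    The CNN feature extractor is assumed to exist and is not shown; the sketch below only illustrates the low-rank plus sparse split of a block-feature matrix, using a basic robust-PCA ADMM iteration rather than whatever solver the authors used. F is the (n_blocks x n_features) matrix of extracted features.

    # Hedged robust-PCA sketch: split F into low-rank (background) + sparse (defect) parts.
    import numpy as np

    def soft_threshold(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_threshold(X, tau):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def rpca(F, n_iter=200, tol=1e-7):
        m, n = F.shape
        lam = 1.0 / np.sqrt(max(m, n))                   # standard sparsity weight
        mu = 0.25 * m * n / (np.abs(F).sum() + 1e-12)    # heuristic penalty parameter
        L = np.zeros_like(F); S = np.zeros_like(F); Y = np.zeros_like(F)
        for _ in range(n_iter):
            L = svd_threshold(F - S + Y / mu, 1.0 / mu)  # low-rank update
            S = soft_threshold(F - L + Y / mu, lam / mu) # sparse update
            resid = F - L - S
            Y += mu * resid                              # dual update
            if np.linalg.norm(resid) <= tol * np.linalg.norm(F):
                break
        return L, S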

  12. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  13. Detection of Vandalism in Wikipedia using Metadata Features – Implementation in Simple English and Albanian sections

    Directory of Open Access Journals (Sweden)

    Arsim Susuri

    2017-03-01

    Full Text Available In this paper, we evaluate a list of classifiers in order to use them in the detection of vandalism by focusing on metadata features. Our work is focused on two low resource data sets (Simple English and Albanian) from Wikipedia. The aim of this research is to prove that this form of vandalism detection applied in one data set (language) can be extended into another data set (language). Article views data sets in Wikipedia have been used rarely for the purpose of detecting vandalism. We will show the benefits of using article views data set with features from the article revisions data set with the aim of improving the detection of vandalism. The key advantage of using metadata features is that these metadata features are language independent and simple to extract because they require minimal processing. This paper shows that application of vandalism models across low resource languages is possible, and vandalism can be detected through view patterns of articles.

  14. The effect of destination linked feature selection in real-time network intrusion detection

    CSIR Research Space (South Africa)

    Mzila, P

    2013-07-01

    Full Text Available Among the techniques used in the network intrusion detection system (NIDS) is the feature selection technique. The ability of NIDS to accurately identify intrusion from the network traffic relies heavily on feature selection, which describes the pattern of the network...

  15. Estuarine Bathymetric Digital Elevation Models (30 meter and 3 arc second resolution) Derived From Source Hydrographic Survey Soundings Collected by NOAA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These Bathymetric Digital Elevation Models (DEM) were generated from original point soundings collected during hydrographic surveys conducted by the National Ocean...

  16. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... even in high-dimensional space. In addition, the latent connection between Rényi quadratic entropy and the mapping data in kernel feature space further facilitates us to capture the geometric structure as well as the information about the underlying labels of the CKD using CSQMI. Thus the resulting...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  17. Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available The wide availability of image processing software makes counterfeiting an easy and low-cost way to distort or conceal facts. Driven by the great need for valid forensic techniques, many methods have been proposed to expose such forgeries. In this paper, we proposed an integrated algorithm which was able to detect two commonly used fraud practices: copy-move and splicing forgery in digital pictures. To achieve this target, a special descriptor for each block was created combining the feature from the JPEG block artifact grid with that from noise estimation. A beforehand image quality assessment procedure reconciled these different features by setting proper weights. Experimental results showed that, compared to existing algorithms, our proposed method is effective at detecting both copy-move and splicing forgery regardless of the JPEG compression ratio of the input image.

  18. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras [3], allowing more clinically relevant retinopathy to be detected [4]. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results [1,4]. However, in 40% of cases, more retinopathy was found outside the 7-field (ETDRS) fields by UWF and in 10% of cases, retinopathy was reclassified as more severe [4]. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR [6]. We have developed an algorithm for automatic recognition of DR features, including bright (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local binary patterns. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best with AUC of 94.4% / 95.31% for bright / dark lesions.

  19. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge in features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.
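
    The tracker and the expert rules from this record are outside the scope of this sketch; it merely shows the kind of track-based features (speed, heading change, dwell fraction) that could be computed from a pedestrian trajectory, with the units and thresholds being assumptions.

    # Toy track-based features from a (T, 2) trajectory of positions over time.
    import numpy as np

    def track_features(track, fps=10.0):
        steps = np.diff(track, axis=0)                       # displacement per frame
        speeds = np.linalg.norm(steps, axis=1) * fps         # speed in position-units/s
        headings = np.arctan2(steps[:, 1], steps[:, 0])
        turn = np.abs(np.diff(np.unwrap(headings)))          # frame-to-frame heading change
        dwell_frac = np.mean(speeds < 0.2)                   # fraction of near-stationary frames (assumed threshold)
        return np.array([speeds.mean(), speeds.max(), turn.mean(), dwell_frac])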

  20. Obscenity detection using haar-like features and Gentle Adaboost classifier.

    Science.gov (United States)

    Mustafa, Rashed; Min, Yang; Zhu, Dingju

    2014-01-01

    A large exposure of skin area in an image is considered obscene. This fact alone may lead to many false detections on images having skin-like objects, and may fail to detect images which have only a partially exposed skin area but show exposed erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is considered an erotogenic organ for identifying pornographic contents in images. In this research a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used for ensuring detection accuracy. A skin filter prior to detection made the system more robust. The experiment showed that, considering accuracy, the haar-cascade classifier performs well, but in order to satisfy detection time, the train-cascade classifier is suitable. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.
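
    Purely as an illustration of the pipeline shape (a skin pre-filter followed by a Haar cascade), the OpenCV sketch below uses a hypothetical cascade file name and rough HSV skin thresholds; it is not the trained GAB classifier from the paper.

    # Illustrative skin pre-filter + Haar cascade detection with OpenCV.
    import cv2

    def detect_with_skin_prefilter(bgr_image, cascade_path="nipple_cascade.xml"):
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Rough skin range in HSV (assumption; thresholds vary by dataset and lighting).
        skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        skin_ratio = skin_mask.mean() / 255.0
        if skin_ratio < 0.05:               # too little skin: skip the costly detector
            return []
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(cascade_path)   # hypothetical trained cascade
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)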

  1. Obscenity Detection Using Haar-Like Features and Gentle Adaboost Classifier

    Directory of Open Access Journals (Sweden)

    Rashed Mustafa

    2014-01-01

    Full Text Available A large exposure of skin area in an image is considered obscene. This fact alone may lead to many false detections on images having skin-like objects, and may fail to detect images which have only a partially exposed skin area but show exposed erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is considered an erotogenic organ for identifying pornographic contents in images. In this research a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used for ensuring detection accuracy. A skin filter prior to detection made the system more robust. The experiment showed that, considering accuracy, the haar-cascade classifier performs well, but in order to satisfy detection time, the train-cascade classifier is suitable. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.

  2. A new feature constituting approach to detection of vocal fold pathology

    Science.gov (United States)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on the acoustic analysis of the voice signal have proved to be an excellent and reliable tool to diagnose vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases, the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database, are used. Four different supervised classifiers, namely k-nearest neighbour (k-NN), least-square support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.
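
    A rough sketch of a wavelet-packet plus SVD feature vector for one voice frame follows; the paper's exact sub-band configuration and the k-means clustering based feature weighting step are not reproduced. Requires PyWavelets (pywt) and NumPy.

    # Wavelet-packet sub-band energies + singular values as a simple feature vector.
    import numpy as np
    import pywt

    def wp_svd_features(signal, wavelet="db4", level=3):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        coeffs = [np.asarray(n.data) for n in nodes]
        # Sub-band log-energies.
        energies = np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])
        # Singular values of the stacked coefficient matrix (truncated to equal length).
        min_len = min(len(c) for c in coeffs)
        mat = np.stack([c[:min_len] for c in coeffs])
        sing_vals = np.linalg.svd(mat, compute_uv=False)
        return np.concatenate([energies, sing_vals])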

  3. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  4. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  5. A FRAMEWORK OF CHANGE DETECTION BASED ON COMBINED MORPHOLOGICA FEATURES AND MULTI-INDEX CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Li

    2017-09-01

    Full Text Available Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the differential water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.

  6. a Framework of Change Detection Based on Combined Morphologica Features and Multi-Index Classification

    Science.gov (United States)

    Li, S.; Zhang, S.; Yang, D.

    2017-09-01

    Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the differential water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
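
    Only the spectral-index step is illustrated below (segmentation, the fragmentation index and the HSV shadow removal are omitted); the EVI and NDWI formulas are the commonly used ones, and the thresholds are assumptions. Band arrays are assumed to be float reflectances with matching shapes.

    # Vegetation/water masks from spectral indices.
    import numpy as np

    def evi(nir, red, blue, eps=1e-6):
        # Enhanced Vegetation Index with the usual MODIS-style coefficients.
        return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0 + eps)

    def ndwi(green, nir, eps=1e-6):
        # McFeeters-style Normalized Difference Water Index.
        return (green - nir) / (green + nir + eps)

    def masks(nir, red, green, blue, evi_thresh=0.3, ndwi_thresh=0.2):
        vegetation = evi(nir, red, blue) > evi_thresh    # thresholds are assumptions
        water = ndwi(green, nir) > ndwi_thresh
        return vegetation, water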

  7. Attentional effects on preattentive vision: Spatial precues affect the detection of simple features

    NARCIS (Netherlands)

    Theeuwes, J.; Kramer, A.F.; Atchley, P.

    1999-01-01

    Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of

  8. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-03-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by

  9. Camouflaged target detection based on polarized spectral features

    Science.gov (United States)

    Tan, Jian; Zhang, Junping; Zou, Bin

    2016-05-01

    Polarized hyperspectral images (PHSI) include polarization, spectral, spatial and radiant features, which provide more information about objects and scenes than traditional intensity or spectral images. Polarization can suppress the background and highlight the object, giving it high potential to improve camouflaged target detection. Polarized hyperspectral imaging techniques have therefore attracted extensive attention in the last few years. At present, the detection methods are still not very mature, and most of them are rooted in detection methods for hyperspectral images. Before using these algorithms, the Stokes vector is first used to process the original four-dimensional polarized hyperspectral data. However, when the data are large and complex, the amount of calculation and the error will increase. In this paper, a tensor representation is applied to reconstruct the original four-dimensional data into new three-dimensional data, and then constrained energy minimization (CEM) is used to process the new data; this adds the polarization information to construct a polarized spectral filter operator and takes full advantage of the spectral and polarization information. This approach deals with the original data without extracting the Stokes vector, so as to greatly reduce the computation and the error. The experimental results also show that the proposed method is more suitable for target detection in PHSI.
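
    The tensor reconstruction of the polarized data is not reproduced; the sketch below shows only a plain constrained energy minimization (CEM) detector applied to a hyperspectral cube, which is the classical formulation the record builds on.

    # Plain CEM detector: w = R^-1 d / (d^T R^-1 d), score = x^T w per pixel.
    import numpy as np

    def cem_detector(cube, target):
        # cube: (H, W, B) array of spectra; target: (B,) target signature.
        H, W, B = cube.shape
        X = cube.reshape(-1, B).astype(float)            # N x B
        R = X.T @ X / X.shape[0]                         # sample correlation matrix
        Rinv = np.linalg.pinv(R)                         # pinv guards against a singular R
        w = Rinv @ target / (target @ Rinv @ target)     # CEM filter
        scores = X @ w                                   # detection statistic per pixel
        return scores.reshape(H, W)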

  10. Computerized detection of diffuse lung disease in MDCT: the usefulness of statistical texture features

    International Nuclear Information System (INIS)

    Wang Jiahui; Li Qiang; Li Feng; Doi Kunio

    2009-01-01

    Accurate detection of diffuse lung disease is an important step for computerized diagnosis and quantification of this disease. It is also a difficult clinical task for radiologists. We developed a computerized scheme to assist radiologists in the detection of diffuse lung disease in multi-detector computed tomography (CT). Two radiologists selected 31 normal and 37 abnormal CT scans with ground glass opacity, reticular, honeycombing and nodular disease patterns based on clinical reports. The abnormal cases in our database must contain at least an abnormal area with a severity of moderate or severe level that was subjectively rated by the radiologists. Because statistical texture features may lack the power to distinguish a nodular pattern from a normal pattern, the abnormal cases that contain only a nodular pattern were excluded. The areas that included specific abnormal patterns in the selected CT images were then delineated as reference standards by an expert chest radiologist. The lungs were first segmented in each slice by use of a thresholding technique, and then divided into contiguous volumes of interest (VOIs) with a 64 x 64 x 64 matrix size. For each VOI, we determined and employed statistical texture features, such as run-length and co-occurrence matrix features, to distinguish abnormal from normal lung parenchyma. In particular, we developed new run-length texture features with clear physical meanings to considerably improve the accuracy of our detection scheme. A quadratic classifier was employed for distinguishing between normal and abnormal VOIs by the use of a leave-one-case-out validation scheme. A rule-based criterion was employed to further determine whether a case was normal or abnormal. We investigated the impact of new and conventional texture features, VOI size and the dimensionality for regions of interest on detecting diffuse lung disease. When we employed new texture features for 3D VOIs of 64 x 64 x 64 voxels, our system achieved the
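
    The new run-length features and the 3D VOI handling from this record are not reproduced; the sketch below computes standard co-occurrence (GLCM) texture features for a single 2D region using scikit-image (graycomatrix/graycoprops, scikit-image >= 0.19), as one ingredient of such a texture-based scheme.

    # GLCM texture features for an 8-bit grayscale region.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_uint8, distances=(1, 2), angles=(0, np.pi / 4, np.pi / 2)):
        glcm = graycomatrix(gray_uint8, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        # One value per (distance, angle) pair for each property, flattened into a vector.
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])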

  11. Bilateral symmetry detection on the basis of Scale Invariant Feature Transform.

    Directory of Open Access Journals (Sweden)

    Habib Akbar

    Full Text Available The automatic detection of bilateral symmetry is a challenging task in computer vision and pattern recognition. This paper presents an approach for the detection of bilateral symmetry in digital single-object images. Our method relies on the extraction of Scale Invariant Feature Transform (SIFT) based feature points, which serve as the basis for ascertaining the centroid of the object; the latter is taken as the origin of the Cartesian coordinate system, which is then converted to a polar coordinate system in order to facilitate the selection of symmetric coordinate pairs. This is followed by comparing the gradient magnitude and orientation of the corresponding points to evaluate the amount of symmetry exhibited by each pair of points. The experimental results show that our approach draws the symmetry line accurately, provided that the observed centroid point is true.

  12. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning of the computed correspondences achieves pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.

  13. Qualitative simulation of bathymetric changes due to reservoir sedimentation: A Japanese case study.

    Directory of Open Access Journals (Sweden)

    Ahmed Bilal

    Full Text Available Sediment-dynamics modeling is a useful tool for estimating a dam's lifespan and its cost-benefit analysis. Collecting real data for sediment-dynamics analysis from conventional field survey methods is both tedious and expensive. Therefore, for most rivers, the historical record of data is either missing or not very detailed. Available data and existing tools have much potential and may be used for qualitative prediction of future bathymetric change trend. This study shows that proxy approaches may be used to increase the spatiotemporal resolution of flow data, and hypothesize the river cross-sections and sediment data. Sediment-dynamics analysis of the reach of the Tenryu River upstream of Sakuma Dam in Japan was performed to predict its future bathymetric changes using a 1D numerical model (HEC-RAS). In this case study, only annually-averaged flow data and the river's longitudinal bed profile at 5-year intervals were available. Therefore, the other required data, including river cross-section and geometry and sediment inflow grain sizes, had to be hypothesized or assimilated indirectly. The model yielded a good qualitative agreement, with an R2 (coefficient of determination) of 0.8 for the observed and simulated bed profiles. A predictive simulation demonstrated that the useful life of the dam would end after the year 2035 (±5 years), which is in conformity with initial detailed estimates. The study indicates that a sediment-dynamic analysis can be performed even with a limited amount of data. However, such studies may only assess the qualitative trends of sediment dynamics.

  14. Qualitative simulation of bathymetric changes due to reservoir sedimentation: A Japanese case study.

    Science.gov (United States)

    Bilal, Ahmed; Dai, Wenhong; Larson, Magnus; Beebo, Qaid Naamo; Xie, Qiancheng

    2017-01-01

    Sediment-dynamics modeling is a useful tool for estimating a dam's lifespan and its cost-benefit analysis. Collecting real data for sediment-dynamics analysis from conventional field survey methods is both tedious and expensive. Therefore, for most rivers, the historical record of data is either missing or not very detailed. Available data and existing tools have much potential and may be used for qualitative prediction of future bathymetric change trend. This study shows that proxy approaches may be used to increase the spatiotemporal resolution of flow data, and hypothesize the river cross-sections and sediment data. Sediment-dynamics analysis of the reach of the Tenryu River upstream of Sakuma Dam in Japan was performed to predict its future bathymetric changes using a 1D numerical model (HEC-RAS). In this case study, only annually-averaged flow data and the river's longitudinal bed profile at 5-year intervals were available. Therefore, the other required data, including river cross-section and geometry and sediment inflow grain sizes, had to be hypothesized or assimilated indirectly. The model yielded a good qualitative agreement, with an R2 (coefficient of determination) of 0.8 for the observed and simulated bed profiles. A predictive simulation demonstrated that the useful life of the dam would end after the year 2035 (±5 years), which is in conformity with initial detailed estimates. The study indicates that a sediment-dynamic analysis can be performed even with a limited amount of data. However, such studies may only assess the qualitative trends of sediment dynamics.

  15. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in the RGB and L*a*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared for various sample sizes by Support Vector Machines using the k-fold cross-validation method. According to the presented results, it has been shown that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.

  16. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    International Nuclear Information System (INIS)

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; Churchill, Michael; Choi, Jong Youl

    2016-01-01

    A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through their overlap in space. Through our extensive work on parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
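
    A hedged, serial sketch of the three-step idea (identify feature cells, group them, track by spatial overlap) on dense 2D frames follows, using SciPy connected-component labelling; the parallel, in-situ implementation described in the record is not shown.

    # Identify, group and track blobs across consecutive frames.
    import numpy as np
    from scipy import ndimage

    def detect_blobs(frame, threshold):
        # Steps 1-2: flag feature cells and group them into labelled blobs.
        mask = frame > threshold
        labels, n = ndimage.label(mask)
        return labels, n

    def track_by_overlap(labels_prev, labels_curr):
        # Step 3: link each current blob to the previous blob it overlaps most.
        links = {}
        for lbl in range(1, labels_curr.max() + 1):
            overlap = labels_prev[labels_curr == lbl]
            overlap = overlap[overlap > 0]
            if overlap.size:
                links[lbl] = int(np.bincount(overlap).argmax())
        return links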

  17. Logic based feature detection on incore neutron spectra

    International Nuclear Information System (INIS)

    Bende-Farkas, S.; Kiss, S.; Racz, A.

    1992-09-01

    A methodology is proposed to investigate neutron spectra in a way similar to human thinking. The goal was to save experts from the tedious, mechanical task of browsing a large number of signals in order to recognize changes in the underlying mechanisms. The general framework for detecting features of incore neutron spectra with a rule-based methodology is presented. As an example, the meaningful peaks in the APSDs are determined. This method is part of a wider project to develop a noise diagnostic expert system. (R.P.) 6 refs.; 6 figs.; 1 tab

  18. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

    Full Text Available Successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, several digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera with 382 x 288 pixel optical resolution, in order to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques, histogram equalization, high-pass and low-pass filters, were considered to increase the signal-to-noise ratio, sharpen the image, and remove noise, respectively. The pre-processed images were then evaluated using current image detection and feature extraction methods, the Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms. The results show that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
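
    The sketch below illustrates the pre-processing chain on an 8-bit thermal frame (histogram equalization, a Gaussian low-pass, and a simple high-pass obtained by subtracting the blurred image); ORB is used as a freely available stand-in for the MSER/SURF detectors evaluated in the paper, so the keypoint counts are only indicative of the same effect.

    # Enhancement chain for a uint8 grayscale thermal frame, plus a keypoint count.
    import cv2

    def enhance(gray):
        equalized = cv2.equalizeHist(gray)                               # stretch contrast
        low_pass = cv2.GaussianBlur(equalized, (5, 5), 0)                # denoise
        high_pass = cv2.addWeighted(equalized, 1.5, low_pass, -0.5, 0)   # unsharp-mask style sharpening
        return equalized, low_pass, high_pass

    def count_keypoints(gray):
        orb = cv2.ORB_create(nfeatures=1000)
        return len(orb.detect(gray, None))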

  19. NOAA TIFF Image - 4m Bathymetric Principal Component Analysis (PCA) of Red Snapper Research Areas in the South Atlantic Bight, 2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains unified Bathymetric PCA GeoTiffs with 4x4 meter cell resolution describing the topography of 15 areas along the shelf edge off the South...

  20. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  1. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

    Full Text Available Background: The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results: GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion: GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  2. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    Science.gov (United States)

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
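
    As a simplified stand-in for the SFFS procedure in this record, the sketch below performs greedy forward selection that maximizes cross-validated AUC with an SVM; the floating (backward) steps and the statistical stopping test of the second variant are omitted, and a binary label vector is assumed.

    # Greedy forward feature selection maximizing cross-validated AUC.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    def cv_auc(X, y, cols):
        scores = cross_val_predict(SVC(kernel="rbf"), X[:, cols], y, cv=5,
                                   method="decision_function")
        return roc_auc_score(y, scores)

    def forward_select_max_auc(X, y, max_features=10):
        selected, best_auc = [], 0.0
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            aucs = [(cv_auc(X, y, selected + [j]), j) for j in remaining]
            auc, j = max(aucs)
            if auc <= best_auc:          # stop when no candidate improves the AUC
                break
            selected.append(j); remaining.remove(j); best_auc = auc
        return selected, best_auc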

  3. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
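
    The modBMVT detectors and the RANSARC registration are assumed to have already produced per-candidate feature vectors from co-registered SAR and IR imagery; the sketch below only illustrates the final decision-level step, letting AdaBoost weight the concatenated features.

    # Decision-level fusion of SAR and IR candidate features with AdaBoost.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def fuse_and_classify(sar_feats, ir_feats, labels):
        X = np.hstack([sar_feats, ir_feats])          # candidate x fused-feature matrix
        clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
        importances = clf.feature_importances_        # implicit feature selection weights
        return clf, importances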

  4. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic

  5. Minimal Data Fidelity for Successful detection of Stellar Features or Companions

    Science.gov (United States)

    Agarwal, S.; Wettlaufer, J. S.

    2017-12-01

    Technological advances in instrumentation have led to an exponential increase in exoplanet detection and in the scrutiny of stellar features such as spots and faculae. While the spots and faculae enable us to understand the stellar dynamics, exoplanets provide us with a glimpse into stellar evolution. While a clean set of data is always desirable, noise is ubiquitous in the data, whether telluric, instrumental, or photonic, and combining this with increased spectrographic resolution compounds the technological challenges. To account for these noise sources and resolution issues, using a temporal multifractal framework, we study data from the SOAP 2.0 tool, which simulates a stellar spectrum in the presence of a spot, a facula or a planet. Given these clean simulations, we vary the resolution as well as the signal-to-noise (S/N) ratio to obtain a lower limit on the resolution and S/N required to robustly detect features. We show that a spot and a facula with 1% coverage of the stellar disk can be robustly detected at an S/N (per resolution element) of 20 and 35, respectively, for any resolution above 20,000, while a planet with an RV of 10 m/s can be detected at an S/N (per resolution element) of 350. Rather than viewing noise as an impediment, this approach uses noise as a source of information.

  6. Driver Fatigue Detection System Using Electroencephalography Signals Based on Combined Entropy Features

    Directory of Open Access Journals (Sweden)

    Zhendong Mu

    2017-02-01

    Full Text Available Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, may be more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, undergoing driving training in a virtual environment under the instruction of the operator. Four types of entropy (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. An electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. The SVM classification algorithm using a radial basis function as the kernel function obtained better results. The combined entropy-based method demonstrates good classification performance for driver fatigue detection.
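
    As an illustration of the entropy-feature-plus-SVM pipeline described above, the sketch below computes one of the four entropies (sample entropy) per synthetic EEG epoch and feeds it to an RBF-kernel SVM; the naive entropy implementation, parameters and data are assumptions rather than the study's settings.

        # One of the four entropies (sample entropy) per epoch, fed to an RBF SVM.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def sample_entropy(x, m=2, r_factor=0.2):
            """Naive O(N^2) sample entropy of a 1-D signal (illustrative only)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()
            def count_matches(order):
                templates = np.array([x[i:i + order] for i in range(len(x) - order)])
                dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
                return (dists <= r).sum() - len(templates)   # exclude self-matches
            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        rng = np.random.default_rng(1)
        labels = rng.integers(0, 2, 100)                     # 1 = fatigued (hypothetical)
        # Fatigued epochs get a strong rhythmic component, lowering their entropy.
        epochs = [rng.normal(size=512)
                  + (3.0 * np.sin(np.linspace(0, 20 * np.pi, 512)) if lab else 0.0)
                  for lab in labels]
        features = np.array([[sample_entropy(e)] for e in epochs])

        svm = SVC(kernel="rbf", C=1.0, gamma="scale")        # RBF kernel, as in the study
        print("CV accuracy:", cross_val_score(svm, features, labels, cv=5).mean())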

  7. Optical detection of random features for high security applications

    Science.gov (United States)

    Haist, T.; Tiziani, H. J.

    1998-02-01

    Optical detection of random features in combination with digital signatures based on public key codes in order to recognize counterfeit objects will be discussed. Without applying expensive production techniques objects are protected against counterfeiting. Verification is done off-line by optical means without a central authority. The method is applied for protecting banknotes. Experimental results for this application are presented. The method is also applicable for identity verification of a credit- or chip-card holder.

  8. Behavioral features recognition and oestrus detection based on fast approximate clustering algorithm in dairy cows

    Science.gov (United States)

    Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua

    2017-06-01

    Behavioural feature recognition is important for detecting oestrus and sickness in dairy herds, and there is a need for heat-detection aids. The detection method in this paper is based on measuring the individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. The data on behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate K-means algorithm (FAKM) was proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.
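
    A minimal stand-in sketch for the clustering step is shown below: scikit-learn's MiniBatchKMeans substitutes for the paper's fast approximate K-means (FAKM), and the per-cow feature vector and oestrus flagging rule are illustrative assumptions.

        # MiniBatchKMeans as a stand-in for the paper's fast approximate K-means.
        import numpy as np
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(42)
        # Hypothetical daily features per cow:
        # [activity index, standing time (h), lying time (h), walking time (h)]
        normal = rng.normal([50, 12, 10, 2], [5, 1, 1, 0.5], size=(180, 4))
        oestrus = rng.normal([80, 14, 7, 4], [5, 1, 1, 0.5], size=(20, 4))  # restless cows
        X = StandardScaler().fit_transform(np.vstack([normal, oestrus]))

        km = MiniBatchKMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        # Flag the cluster with the higher mean activity index as candidate oestrus.
        active_cluster = int(np.argmax(km.cluster_centers_[:, 0]))
        flags = km.labels_ == active_cluster
        print("cows flagged for an oestrus check:", int(flags.sum()))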

  9. Comparing experts and novices in Martian surface feature change detection and identification

    Science.gov (United States)

    Wardlaw, Jessica; Sprinks, James; Houghton, Robert; Muller, Jan-Peter; Sidiropoulos, Panagiotis; Bamford, Steven; Marsh, Stuart

    2018-02-01

    Change detection in satellite images is a key concern of the Earth Observation field for environmental and climate change monitoring. Satellite images also provide important clues to both the past and present surface conditions of other planets, which cannot be validated on the ground. With the volume of satellite imagery continuing to grow, the inadequacy of computerised solutions to manage and process imagery to the required professional standard is of critical concern. Whilst studies find the crowd sourcing approach suitable for the counting of impact craters in single images, images of higher resolution contain a much wider range of features, and the performance of novices in identifying more complex features and detecting change remains unknown. This paper presents a first step towards understanding whether novices can identify and annotate changes in different geomorphological features. A website was developed to enable visitors to flick between two images of the same location on Mars taken at different times and classify 1) if a surface feature changed and if so, 2) what feature had changed from a pre-defined list of six. Planetary scientists provided "expert" data against which classifications made by novices could be compared when the project subsequently went public. Whilst no significant difference was found in images identified with surface changes by experts and novices, results exhibited differences in consensus within and between experts and novices when asked to classify the type of change. Experts demonstrated higher levels of agreement in classification of changes as dust devil tracks, slope streaks and impact craters than other features, whilst the consensus of novices was consistent across feature types. These trends are secondary to the low levels of consensus found, regardless of feature type or classifier expertise. These findings demand the attention of researchers who

  10. xMSanalyzer: automated pipeline for improved feature detection and downstream analysis of large-scale, non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Uppal Karan

    2013-01-01

    Full Text Available Abstract Background Detection of low abundance metabolites is important for de novo mapping of metabolic pathways related to diet, microbiome or environmental exposures. Multiple algorithms are available to extract m/z features from liquid chromatography-mass spectral data in a conservative manner, which tends to preclude detection of low abundance chemicals and chemicals found in small subsets of samples. The present study provides software to enhance such algorithms for feature detection, quality assessment, and annotation. Results xMSanalyzer is a set of utilities for automated processing of metabolomics data. The utilities can be classified into four main modules to: (1) improve feature detection for replicate analyses by systematic re-extraction with multiple parameter settings and data merger to optimize the balance between sensitivity and reliability, (2) evaluate sample quality and feature consistency, (3) detect feature overlap between datasets, and (4) characterize high-resolution m/z matches to small molecule metabolites and biological pathways using multiple chemical databases. The package was tested with plasma samples and shown to more than double the number of features extracted while improving quantitative reliability of detection. MS/MS analysis of a random subset of peaks that were exclusively detected using xMSanalyzer confirmed that the optimization scheme improves detection of real metabolites. Conclusions xMSanalyzer is a package of utilities for data extraction, quality control assessment, detection of overlapping and unique metabolites in multiple datasets, and batch annotation of metabolites. The program was designed to integrate with existing packages such as apLCMS and XCMS, but the framework can also be used to enhance data extraction for other LC/MS data software.

  11. Towards Stable Adversarial Feature Learning for LiDAR based Loop Closure Detection

    OpenAIRE

    Xu, Lingyun; Yin, Peng; Luo, Haibo; Liu, Yunhui; Han, Jianda

    2017-01-01

    Stable feature extraction is the key for the Loop closure detection (LCD) task in the simultaneously localization and mapping (SLAM) framework. In our paper, the feature extraction is operated by using a generative adversarial networks (GANs) based unsupervised learning. GANs are powerful generative models, however, GANs based adversarial learning suffers from training instability. We find that the data-code joint distribution in the adversarial learning is a more complex manifold than in the...

  12. CORNER-DETECTION-BASED FEATURE RECOGNITION WITH THE FAST, SURF AND FLANN TREE METHODS FOR LOGO IDENTIFICATION IN AN AUGMENTED REALITY MOBILE SYSTEM

    Directory of Open Access Journals (Sweden)

    Rastri Prathivi

    2014-01-01

    Full Text Available A logo is a graphical symbol that serves as the identity of an organization, institution, or company. A logo is generally used to introduce an organization, institution, or company to the public; through its logo, the existence of an agency can be recognized by the public. Feature recognition is one of the processes within an augmented reality system, and one use of augmented reality is to recognize the identity of a logo through a camera. The first step of the feature recognition process is corner detection. Combining several methods, such as FAST, SURF, and FLANN TREE, from corner-detection-based feature detection up to the feature matching process gives a better ability to detect the presence of a logo. Additionally, the feature extraction process raises several issues, such as scale-invariant and rotation-invariant features. In this study the research object is a logo, with priority given to the feature recognition process. The FAST, SURF, and FLANN TREE methods detect the logo under varying scale and rotation conditions. The results of this study demonstrate the accuracy of the FAST, SURF, and FLANN TREE methods in solving the scale-invariant and rotation-invariant feature problems.
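
    The sketch below illustrates the FAST corner detection, SURF description and FLANN matching chain named above using OpenCV; it assumes an opencv-contrib build with the non-free SURF module enabled, and the file names are placeholders.

        # FAST corners + SURF descriptors + FLANN matching with Lowe's ratio test.
        # SURF requires an opencv-contrib build compiled with OPENCV_ENABLE_NONFREE.
        import cv2

        logo = cv2.imread("logo_template.png", cv2.IMREAD_GRAYSCALE)    # placeholder
        scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder

        fast = cv2.FastFeatureDetector_create(threshold=25)       # corner detection
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # descriptors

        kp_logo = fast.detect(logo, None)
        kp_scene = fast.detect(scene, None)
        kp_logo, des_logo = surf.compute(logo, kp_logo)
        kp_scene, des_scene = surf.compute(scene, kp_scene)

        # FLANN with a KD-tree index (algorithm=1), then the ratio test.
        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        matches = flann.knnMatch(des_logo, des_scene, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        print("good matches:", len(good))    # many good matches -> logo identified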

  13. Feature Optimize and Classification of EEG Signals: Application to Lie Detection Using KPCA and ELM

    Directory of Open Access Journals (Sweden)

    GAO Junfeng

    2014-04-01

    Full Text Available EEG signals have been widely used to detect liars in recent years. To overcome the shortcomings of current signal processing, kernel principal component analysis (KPCA) and an extreme learning machine (ELM) were combined to detect liars. We recorded the EEG signals at Pz from 30 randomly divided guilty and innocent subjects. Every five Probe responses were averaged within subject and wavelet features were then extracted. KPCA was employed to select a feature subset with reduced dimensions based on the initial wavelet features, which was fed into the ELM. To date, there is no perfect solution for the number of its hidden nodes (NHN). We used a grid searching algorithm to simultaneously select the optimal values of the feature subset dimension and NHN based on a cross-validation method. The best classification model was decided with the optimal search values. Experimental results show that, for EEG signals from the lie detection experiment, KPCA_ELM has higher classification accuracy with faster training speed than other widely used classification models, which is especially suitable for online EEG signal processing systems.
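
    A hedged sketch of the KPCA-to-ELM chain follows: scikit-learn's KernelPCA reduces the wavelet-feature dimension and a minimal single-hidden-layer ELM (random input weights, least-squares output weights) classifies the result; the dimensions and NHN below are placeholders, not the values chosen by the paper's grid search.

        # KernelPCA feature reduction followed by a minimal extreme learning machine.
        import numpy as np
        from sklearn.decomposition import KernelPCA

        class SimpleELM:
            """Single-hidden-layer ELM: random input weights, least-squares output."""
            def __init__(self, n_hidden=20, seed=0):
                self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)
            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                self.beta = np.linalg.pinv(self._hidden(X)) @ y   # closed-form weights
                return self
            def predict(self, X):
                return (self._hidden(X) @ self.beta > 0.5).astype(int)

        rng = np.random.default_rng(3)
        wavelet_feats = rng.normal(size=(30, 64))   # stand-in for averaged Probe features
        labels = rng.integers(0, 2, 30)             # guilty vs. innocent (synthetic)

        Z = KernelPCA(n_components=10, kernel="rbf", gamma=1e-2).fit_transform(wavelet_feats)
        elm = SimpleELM(n_hidden=20).fit(Z, labels)
        print("training accuracy:", (elm.predict(Z) == labels).mean())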

  14. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown a good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues and improve carcinoma detection results by 7% with respect to traditional autoencoders, and 6% with respect to standard DCT representations, obtaining on average 92% in terms of F-score and 93% balanced accuracy.

  15. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.

  16. Automated Feature and Event Detection with SDO AIA and HMI Data

    Science.gov (United States)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F..; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgouli, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO) . Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  17. Implementation of a FPGA-Based Feature Detection and Networking System for Real-time Traffic Monitoring

    OpenAIRE

    Chen, Jieshi; Schafer, Benjamin Carrion; Ho, Ivan Wang-Hei

    2016-01-01

    With the growing demand of real-time traffic monitoring nowadays, software-based image processing can hardly meet the real-time data processing requirement due to the serial data processing nature. In this paper, the implementation of a hardware-based feature detection and networking system prototype for real-time traffic monitoring as well as data transmission is presented. The hardware architecture of the proposed system is mainly composed of three parts: data collection, feature detection,...

  18. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    OpenAIRE

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snat...

  19. Bathymetric and velocimetric surveys at highway bridges crossing the Missouri River between Kansas City and St. Louis, Missouri, April-May, 2013

    Science.gov (United States)

    Huizinga, Richard J.

    2014-01-01

    Bathymetric and velocimetric data were collected by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, in the vicinity of 10 bridges at 9 highway crossings of the Missouri River between Lexington and Washington, Missouri, from April 22 through May 2, 2013. A multibeam echosounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 1,640 to 1,840 feet longitudinally and extending laterally across the active channel between banks and spur dikes in the Missouri River during low- to moderate-flow conditions. These bathymetric surveys indicate the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be useful to the Missouri Department of Transportation to assess the bridges for stability and integrity issues with respect to bridge scour during floods. Bathymetric data were collected around every pier that was in water, except those at the edge of water or in very shallow water (less than about 6 feet). Scour holes were present at most piers for which bathymetry could be obtained, except at piers on channel banks, near or embedded in lateral or longitudinal spur dikes, and on exposed bedrock outcrops. Scour holes observed at the surveyed bridges were examined with respect to depth and shape. Although exposure of parts of foundational support elements was observed at several piers, at most sites the exposure likely can be considered minimal compared to the overall substructure that remains buried in channel-bed material; however, there were several notable exceptions where the bed material thickness between the bottom of the scour hole and bedrock was less than 6 feet. Such substantial exposure of usually buried substructural elements may warrant special observation in future flood events. Previous bathymetric surveys had been done at all of the

  20. Epileptic MEG Spike Detection Using Statistical Features and Genetic Programming with KNN

    Directory of Open Access Journals (Sweden)

    Turky N. Alotaiby

    2017-01-01

    Full Text Available Epilepsy is a neurological disorder that affects millions of people worldwide. Monitoring brain activity and identifying the seizure source, which starts with spike detection, are important steps for epilepsy treatment. Magnetoencephalography (MEG) is an emerging epileptic diagnostic tool with high-density sensors; this makes manual analysis a challenging task due to the vast amount of MEG data. This paper explores the use of eight statistical features and genetic programming (GP) with the K-nearest neighbor (KNN) classifier for interictal spike detection. The proposed method comprises three stages: preprocessing, genetic programming-based feature generation, and classification. The effectiveness of the proposed approach has been evaluated using real MEG data obtained from 28 epileptic patients. It has achieved a 91.75% average sensitivity and 92.99% average specificity.
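
    The sketch below illustrates only the statistical-features-plus-KNN stage (the genetic-programming feature generation step is omitted); the eight descriptive statistics and the synthetic spike windows are assumptions, not the paper's exact feature set.

        # Eight descriptive statistics per window, classified with K-nearest neighbours.
        import numpy as np
        from scipy import stats
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def window_features(w):
            return [w.mean(), w.std(), w.min(), w.max(),
                    stats.skew(w), stats.kurtosis(w),
                    np.sqrt(np.mean(w ** 2)),          # RMS
                    np.ptp(w)]                         # peak-to-peak

        rng = np.random.default_rng(7)
        labels = rng.integers(0, 2, 200)               # 1 = window contains a spike
        # Spike windows get a sharp Gaussian transient added to the background noise.
        windows = [rng.normal(size=300)
                   + lab * 6.0 * np.exp(-0.5 * ((np.arange(300) - 150) / 5) ** 2)
                   for lab in labels]
        X = np.array([window_features(w) for w in windows])

        knn = KNeighborsClassifier(n_neighbors=5)
        print("CV accuracy:", cross_val_score(knn, X, labels, cv=5).mean())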

  1. System for Detecting Vehicle Features from Low Quality Data

    Directory of Open Access Journals (Sweden)

    Marcin Dominik Bugdol

    2018-02-01

    Full Text Available The paper presents a system that recognizes the make, colour and type of a vehicle. The classification has been performed using low quality data from real-traffic measurement devices. For detecting vehicles’ specific features, three methods have been developed. They employ several image and signal recognition techniques, e.g. the Mamdani Fuzzy Inference System for colour recognition or the Scale Invariant Feature Transform for make identification. The obtained results are very promising, especially because only on-site equipment, not dedicated to such applications, has been employed. In the case of car type, the proposed system has better performance than commonly used inductive loops. Extensive information about the vehicle can be used in many fields of Intelligent Transport Systems, especially for traffic supervision.

  2. Bathymetric and velocimetric surveys at highway bridges crossing the Missouri River near Kansas City, Missouri, June 2–4, 2015

    Science.gov (United States)

    Huizinga, Richard J.

    2016-06-22

    Bathymetric and velocimetric data were collected by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, near 8 bridges at 7 highway crossings of the Missouri River in Kansas City, Missouri, from June 2 to 4, 2015. A multibeam echosounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 1,640 to 1,660 feet longitudinally and extending laterally across the active channel from bank to bank during low to moderate flood flow conditions. These bathymetric surveys indicate the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be useful to the Missouri Department of Transportation as a low to moderate flood flow comparison to help assess the bridges for stability and integrity issues with respect to bridge scour during floods.

  3. Bathymetric and velocimetric surveys at highway bridges crossing the Missouri and Mississippi Rivers on the periphery of Missouri, June 2014

    Science.gov (United States)

    Huizinga, Richard J.

    2015-01-01

    Bathymetric and velocimetric data were collected by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, in the vicinity of 8 bridges at 7 highway crossings of the Missouri and Mississippi Rivers on the periphery of Missouri from June 3 to 11, 2014. A multibeam echosounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 1,525 to 1,640 feet longitudinally, and extending laterally across the active channel from bank to bank during low- to moderate-flow conditions. These bathymetric surveys indicate the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be useful to the Missouri Department of Transportation as a low- to moderate-flow comparison to help assess the bridges for stability and integrity issues with respect to bridge scour during floods.

  4. Robust and fast license plate detection based on the fusion of color and edge feature

    Science.gov (United States)

    Cai, De; Shi, Zhonghan; Liu, Jin; Hu, Chuanping; Mei, Lin; Qi, Li

    2014-11-01

    Extracting a license plate is an important stage in automatic vehicle identification. The degradation of images and the computational intensity make this task difficult. In this paper, a robust and fast license plate detection method based on the fusion of color and edge features is proposed. Based on the dichromatic reflection model, two new color ratios computed from the RGB color model are introduced and proved to be two color invariants. The global color feature extracted by the new color invariants improves the method's robustness. The local Sobel edge feature guarantees the method's accuracy. In the experiment, the detection performance is good. The detection results show that this paper's method is robust to the illumination, object geometry and the disturbance around the license plates. The method can also detect license plates when the color of the car body is the same as the color of the plates. The processing time for an image of 1000 x 1000 pixels is nearly 0.2 s. Based on the comparison, the performance of the new ratios is comparable to the commonly used HSI color model.

  5. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  6. Fast detection of vascular plaque in optical coherence tomography images using a reduced feature set

    Science.gov (United States)

    Prakash, Ammu; Ocana Macias, Mariano; Hewko, Mark; Sowa, Michael; Sherif, Sherif

    2018-03-01

    Optical coherence tomography (OCT) images are capable of detecting vascular plaque by using the full set of 26 Haralick textural features and a standard K-means clustering algorithm. However, the use of the full set of 26 textural features is computationally expensive and may not be feasible for real-time implementation. In this work, we identified a reduced set of 3 textural features which characterize vascular plaque and used a generalized Fuzzy C-means clustering algorithm. Our work involves three steps: 1) the reduction of the full set of 26 textural features to a reduced set of 3 textural features by using a genetic algorithm (GA) optimization method, 2) the implementation of an unsupervised generalized clustering algorithm (Fuzzy C-means) on the reduced feature space, and 3) the validation of our results using histology and actual photographic images of vascular plaque. Our results show an excellent match with histology and actual photographic images of vascular tissue. Therefore, our results could provide an efficient pre-clinical tool for the detection of vascular plaque in real-time OCT imaging.
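
    In the spirit of the description above, the sketch below computes three GLCM texture measures per patch (a stand-in for the GA-selected Haralick subset, using the scikit-image >= 0.19 names graycomatrix/graycoprops) and clusters them with ordinary K-means in place of the paper's generalized Fuzzy C-means; the synthetic patches are illustrative.

        # Three GLCM texture measures per patch, clustered into two tissue groups.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.cluster import KMeans

        def patch_features(patch):
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            return [graycoprops(glcm, p)[0, 0]
                    for p in ("contrast", "homogeneity", "energy")]

        rng = np.random.default_rng(0)
        # Hypothetical 32x32 OCT patches: smooth vessel wall vs. speckled plaque.
        smooth = [rng.normal(120, 5, (32, 32)).clip(0, 255).astype(np.uint8) for _ in range(40)]
        plaque = [rng.normal(120, 40, (32, 32)).clip(0, 255).astype(np.uint8) for _ in range(40)]
        X = np.array([patch_features(p) for p in smooth + plaque])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print("cluster sizes:", np.bincount(labels))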

  7. Detection and analysis of diamond fingerprinting feature and its application

    Energy Technology Data Exchange (ETDEWEB)

    Li Xin; Huang Guoliang; Li Qiang; Chen Shengyi, E-mail: tshgl@tsinghua.edu.cn [Department of Biomedical Engineering, the School of Medicine, Tsinghua University, Beijing, 100084 (China)

    2011-01-01

    Before becoming jewelry, diamonds need to be carved artistically with special geometric features forming a polyhedral structure. There are subtle differences in the structure of this polyhedron in each diamond. With spatial frequency spectrum analysis of the diamond surface structure, we can obtain the diamond fingerprint information, which represents the 'Diamond ID' and has good specificity. Based on optical Fourier Transform spatial spectrum analysis, the fingerprint identification of the diamond surface structure in the spatial frequency domain was studied in this paper. We constructed both a completely coherent diamond fingerprinting detection system illuminated by a laser and a partially coherent diamond fingerprinting detection system illuminated by an LED, and analyzed the effect of the coherence of the light source on the diamond fingerprinting feature. We studied rotation invariance and translation invariance of the diamond fingerprinting and verified the feasibility of real-time and accurate identification of diamond fingerprints. With the benefit of this work, we can provide customs, jewelers and consumers with a real-time and reliable diamond identification instrument, which will curb diamond smuggling, theft and other crimes, and ensure the healthy development of the diamond industry.

  8. Evaluation of LiDAR-acquired bathymetric and topographic data accuracy in various hydrogeomorphic settings in the Deadwood and South Fork Boise Rivers, West-Central Idaho, 2007

    Science.gov (United States)

    Skinner, Kenneth D.

    2011-01-01

    High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging)-or EAARL-system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The

  9. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    Science.gov (United States)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in the field of medicine. IT is non-invasive and detects diseases based on measuring temperature variations. IT represents a possible alternative to prevalent methods for diagnosis of CTS (i.e. nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken from healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for temperature spatial variability.
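
    A minimal sketch of the SVM-plus-leave-one-out protocol described above is given below with scikit-learn; the spatial-temporal feature matrix is synthetic, and the linear kernel is chosen only because linear separability is reported.

        # Linear SVM evaluated with leave-one-out cross-validation.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(11)
        healthy = rng.normal(0.0, 1.0, size=(20, 6))   # synthetic spatial-temporal features
        cts = rng.normal(1.5, 1.0, size=(20, 6))       # e.g. hand-region temperature stats
        X = np.vstack([healthy, cts])
        y = np.array([0] * 20 + [1] * 20)

        clf = SVC(kernel="linear")                      # linear separability is reported
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("LOO validation error:", 1 - scores.mean())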

  10. Asymmetry features for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz

    2016-09-01

    The computer system for automatic interpretation of thermographic pictures created by the Braster devices uses image processing and machine learning algorithms. The huge set of attributes analyzed by this software includes the asymmetry measurements between corresponding images, and these features are analyzed in the presented paper. The system was tested on real data and achieves accuracy comparable to other popular techniques used for breast tumour detection.

  11. Surveying alignment-free features for Ortholog detection in related yeast proteomes by using supervised big data classifiers.

    Science.gov (United States)

    Galpert, Deborah; Fernández, Alberto; Herrera, Francisco; Antunes, Agostinho; Molina-Ruiz, Reinaldo; Agüero-Chapin, Guillermin

    2018-05-03

    The development of new ortholog detection algorithms and the improvement of existing ones are of major importance in functional genomics. We have previously introduced a successful supervised pairwise ortholog classification approach implemented in a big data platform that considered several pairwise protein features and the low ortholog pair ratios found between two annotated proteomes (Galpert, D et al., BioMed Research International, 2015). The supervised models were built and tested using a Saccharomycete yeast benchmark dataset proposed by Salichos and Rokas (2011). Although several pairwise protein features were combined in a supervised big data approach, they were all, to some extent, alignment-based features, and the proposed algorithms were evaluated on a unique test set. Here, we aim to evaluate the impact of alignment-free features on the performance of supervised models implemented in the Spark big data platform for pairwise ortholog detection in several related yeast proteomes. The Spark Random Forest and Decision Trees with oversampling and undersampling techniques, built with only alignment-based similarity measures or combined with several alignment-free pairwise protein features, showed the highest classification performance for ortholog detection in three yeast proteome pairs. Although such supervised approaches outperformed traditional methods, there were no significant differences between the exclusive use of alignment-based similarity measures and their combination with alignment-free features, even within the twilight zone of the studied proteomes. Only when alignment-based and alignment-free features were combined in Spark Decision Trees with imbalance management could a higher success rate (98.71%) within the twilight zone be achieved for a yeast proteome pair that underwent a whole genome duplication. The feature selection study showed that alignment-based features were top-ranked for the best classifiers while the runners-up were

  12. Improving features used for hyper-temporal land cover change detection by reducing the uncertainty in the feature extraction method

    CSIR Research Space (South Africa)

    Salmon, BP

    2017-07-01

    Full Text Available This work investigates the effect which the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short Fourier transform as a feature extraction method provides meaningful, robust input to a machine learning method. In theory...
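
    The sketch below illustrates the idea of extracting Fourier features over a temporal sliding window from a hyper-temporal series; the NDVI-like signal, window length and use of scipy.signal.stft are assumptions for illustration, not the study's exact feature extractor.

        # Short-time Fourier features over a sliding temporal window.
        import numpy as np
        from scipy.signal import stft

        rng = np.random.default_rng(4)
        t = np.arange(368)                              # e.g. eight years of 8-day composites
        ndvi = 0.4 + 0.2 * np.sin(2 * np.pi * t / 46) + rng.normal(0, 0.02, t.size)
        ndvi[230:] += 0.15                              # simulated land-cover change

        window_length = 46                              # the parameter under study
        f, seg_times, Z = stft(ndvi, nperseg=window_length, noverlap=window_length - 1)
        features = np.abs(Z).T       # one spectral feature vector per window position
        print("feature matrix (windows x frequency bins):", features.shape)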

  13. Improved detection and mapping of deepwater hydrocarbon seeps: optimizing multibeam echosounder seafloor backscatter acquisition and processing techniques

    Science.gov (United States)

    Mitchell, Garrett A.; Orange, Daniel L.; Gharib, Jamshid J.; Kennedy, Paul

    2018-02-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration surveys due to recent advances in offshore geophysical surveying, geochemical sampling, and analytical technologies. Hydrocarbon seeps are ephemeral, small, discrete, and therefore difficult to sample on the deep seafloor. Multibeam echosounders are an efficient seafloor exploration tool to remotely locate and map seep features. Geophysical signatures from hydrocarbon seeps are acoustically-evident in bathymetric, seafloor backscatter, midwater backscatter datasets. Interpretation of these signatures in backscatter datasets is a fundamental component of commercial seep hunting campaigns. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a relative backscatter intensity normalization method and an oversampling acquisition technique that can improve the geological resolvability of hydrocarbon seeps. We use Green Canyon (GC) Block 600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting program to analyze these techniques. At GC600, we evaluate the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving seep-related features in backscatter data, and determine the off-nadir detection limits of bubble plumes using the EM302. Incorporating these techniques into seep hunting surveys can improve the detectability and sampling of seafloor seeps.

  15. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  16. Incidental breast masses detected by computed tomography: are any imaging features predictive of malignancy?

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom)], E-mail: Gareth.Porter@phnt.swest.nhs.uk; Steel, J.; Paisley, K.; Watkins, R. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom); Holgate, C. [Department of Histopathology, Derriford Hospital, Plymouth (United Kingdom)

    2009-05-15

    Aim: To review the outcome of further assessment of breast abnormalities detected incidentally by multidetector computed tomography (MDCT) and to determine whether any MDCT imaging features were predictive of malignancy. Material and methods: The outcome of 34 patients referred to the Primrose Breast Care Unit with breast abnormalities detected incidentally using MDCT was prospectively recorded. Women with a known diagnosis of breast cancer were excluded. CT imaging features and histological diagnoses were recorded and the correlation assessed using Fisher's exact test. Results: Of the 34 referred patients a malignant diagnosis was noted in 11 (32%). There were 10 breast malignancies (seven invasive ductal carcinomas, one invasive lobular carcinoma, two metastatic lesions) and one axillary lymphoma. CT features suggestive of breast malignancy were spiculation [6/10 (60%) versus 0/24 (0%) p = 0.0002] and associated axillary lymphadenopathy [3/10 (33%) versus 0/20 (0%) p = 0.030]. Conversely, a well-defined mass was suggestive of benign disease [10/24 (42%) versus 0/10 (0%); p = 0.015]. Associated calcification, ill-definition, heterogeneity, size, and multiplicity of lesions were not useful discriminating CT features. There was a non-significant trend for lesions in involuted breasts to be more frequently malignant than in dense breasts [6/14 (43%) versus 4/20 (20%) p = 0.11]. Conclusion: In the present series there was a significant rate (32%) of malignancy in patients referred to the breast clinic with CT-detected incidental breast lesions. The CT features of spiculation or axillary lymphadenopathy are strongly suggestive of malignancy.

  17. Sleep Spindle Detection and Prediction Using a Mixture of Time Series and Chaotic Features

    Directory of Open Access Journals (Sweden)

    Amin Hekmatmanesh

    2017-01-01

    Full Text Available It is well established that sleep spindles (bursts of oscillatory brain electrical activity) are significant indicators of learning, memory and some disease states. Therefore, many attempts have been made to detect these hallmark patterns automatically. In this pilot investigation, we paid special attention to nonlinear chaotic features of EEG signals (in combination with linear features) to investigate the detection and prediction of sleep spindles. These nonlinear features included: Higuchi's, Katz's and Sevcik's Fractal Dimensions, as well as the Largest Lyapunov Exponent and Kolmogorov's Entropy. It was shown that the intensity map of various nonlinear features derived from the constructive interference of spindle signals could improve the detection of the sleep spindles. It was also observed that the prediction of sleep spindles could be facilitated by means of the analysis of these maps. Two well-known classifiers, namely the Multi-Layer Perceptron (MLP) and the K-Nearest Neighbor (KNN), were used to distinguish between spindle and non-spindle patterns. The MLP classifier produced a high discriminative capacity (accuracy = 94.93%, sensitivity = 94.31% and specificity = 95.28%) with significant robustness (accuracy ranging from 91.33% to 94.93%, sensitivity varying from 91.20% to 94.31%, and specificity extending from 89.79% to 95.28%) in separating spindles from non-spindles. This classifier also generated the best results in predicting sleep spindles based on chaotic features. In addition, the MLP was used to find out the best time window for predicting the sleep spindles, with the experimental results reaching 97.96% accuracy.

  18. Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery

    Science.gov (United States)

    Moody, Daniela Irina

    2018-04-17

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
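
    A rough sketch of the CoSA idea follows: learn a patch dictionary, sparse-code the patches, then k-means cluster the codes into land-cover categories. It uses scikit-learn's dictionary learning in place of the Hebbian rule described above, and the patch size, dictionary size and cluster count are illustrative.

        # Learn a patch dictionary, sparse-code the patches, cluster the codes.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.image import extract_patches_2d

        rng = np.random.default_rng(5)
        image = rng.random((128, 128))                  # stand-in for one image band
        patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
        P = patches.reshape(len(patches), -1)
        P -= P.mean(axis=1, keepdims=True)              # remove local brightness

        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
        codes = dico.fit_transform(P)                   # sparse approximations of patches

        clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
        print("patches per land-cover cluster:", np.bincount(clusters))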

  19. Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment

    DEFF Research Database (Denmark)

    Andersen, Mikkel Skovgaard; Gergely, Áron; Al-Hamdani, Zyad K.

    2017-01-01

    of detecting features with a size of less than 1 m2. The derived high-resolution DEM was applied for detection and classification of geomorphometric and morphological features within the natural environment of the study area. Initially, the bathymetric position index (BPI) and the slope of the DEM were used...... area into six specific types of morphological features (i.e. subtidal channel, intertidal flat, intertidal creek, linear bar, swash bar and beach dune). The developed classification method is adapted and applied to a specific case, but it can also be implemented in other cases and environments....

  20. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    Science.gov (United States)

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

    Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.

  1. A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease.

    Science.gov (United States)

    Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang

    2017-06-09

    Machine learning methods have been widely used in recent years for detection of neuroimaging biomarkers in regions of interest (ROIs) and assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection methods and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved compared with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming the potentiality of our method in assisting diagnosis of the disease. Copyright © 2017 Elsevier B.V. All rights reserved.
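
    As a hedged illustration of a multi-kernel SVM over two feature levels, the sketch below combines one RBF kernel per level with fixed weights and feeds the sum to an SVC with a precomputed kernel; the synthetic data, kernel weights and gamma values are assumptions, not the study's learned values.

        # Two RBF kernels (one per feature level) combined into a precomputed kernel.
        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        low_level = rng.normal(size=(172, 90))      # e.g. per-ROI volume/thickness
        high_level = rng.normal(size=(172, 200))    # e.g. ROI-to-ROI connectivity
        y = rng.integers(0, 2, 172)                 # PD vs. control (synthetic labels)

        idx_tr, idx_te = train_test_split(np.arange(172), random_state=0)

        def combined_kernel(rows, cols, w=(0.5, 0.5)):
            return (w[0] * rbf_kernel(low_level[rows], low_level[cols], gamma=1e-2)
                    + w[1] * rbf_kernel(high_level[rows], high_level[cols], gamma=1e-3))

        clf = SVC(kernel="precomputed")
        clf.fit(combined_kernel(idx_tr, idx_tr), y[idx_tr])
        print("test accuracy:", clf.score(combined_kernel(idx_te, idx_tr), y[idx_te]))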

  2. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Full Text Available Because saliency can be used as the prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other relevant image-based applications. In the case of saliency detection from cluster scenes, the salient object/region detected needs to not only be distinguished clearly from the background, but, preferably, to also be informative in terms of complete contour and local texture details to facilitate the successive processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection from cluster scenes. This model uses a combination of local texture patterns and color distribution as well as contour information to encode the superpixels to characterize the local features of the image for region contrast computing. Combining the region contrast as computed with the global saliency probability, a full-resolution salient map, in which the salient object/region detected adheres more closely to its inherent feature, is obtained on the basis of the corresponding high-level saliency spatial distribution as well as on the pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets are carried out, and the comparative results show that the method we propose improves the detection performance in terms of corresponding measurements.

  3. P2-18: Temporal and Featural Separation of Memory Items Play Little Role for VSTM-Based Change Detection

    Directory of Open Access Journals (Sweden)

    Dae-Gyu Kim

    2012-10-01

    Full Text Available Classic studies of visual short-term memory (VSTM) found that presenting memory items either sequentially or simultaneously does not affect recognition accuracy of the remembered items. Other studies also suggest that the capacity of VSTM benefits from the formation of bound object-based representations, leading to no cost of remembering multi-feature items. According to these ideas, we aimed to examine the role of temporal and featural separation of memory items in VSTM change detection: (1) if sample items are separated across different temporal moments and (2) if across different feature dimensions. In a series of change detection experiments, we asked participants to report a change between a sample and a test display with a brief delay in between. In Experiment 1, the sample items were split into two sets with a different onset time. In Experiment 2, the sample items were split across two different feature dimensions (e.g., half color and half orientation). The change detection accuracy in Experiment 1 showed no substantial drop when the memory items were separated into two onset groups compared to simultaneous onset. The accuracy did not drop either when the features of sample items were split across two different feature groups compared to when they were not split. The results indicate that temporal and featural separation of VWM items does not play a significant role for VSTM-based change detection.

  4. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID

    2011-09-27

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  5. Part-based Pedestrian Detection and Feature-based Tracking for Driver Assistance

    DEFF Research Database (Denmark)

    Prioletti, Antonio; Møgelmose, Andreas; Grislieri, Paolo

    2013-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusion, and high-speed vehicle motion. Much research has been focused on this problem in the last ten years and detectors based on classifiers have...... on a prototype vehicle and offers high performance in terms of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system relies on the combination of a HOG part-based approach, tracking based on a specific optimized feature, and porting on a real prototype....
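
    The sketch below shows only the HOG detection stage, using OpenCV's built-in pedestrian detector; it does not reproduce the part-based model or the feature-based tracker of the system above, and the input file name is a placeholder.

        # OpenCV's default HOG + linear-SVM people detector (detection stage only).
        import cv2

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        frame = cv2.imread("road_scene.jpg")            # placeholder file name
        rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        for (x, y, w, h) in rects:
            cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
        cv2.imwrite("detections.jpg", frame)
        print("pedestrian candidates:", len(rects))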

  6. Nodule detection methods using autocorrelation features on 3D chest CT scans

    International Nuclear Information System (INIS)

    Hara, T.; Zhou, X.; Okura, S.; Fujita, H.; Kiryu, T.; Hoshi, H.

    2007-01-01

    Lung cancer screening using low dose X-ray CT scans has been an acceptable examination to detect cancers at an early stage. We have been developing an automated detection scheme for lung nodules on CT scans by using second-order autocorrelation features, and the initial performance for small nodules (< 10 mm) shows a high true-positive rate with less than four false-positive marks per case. In this study, an open database of lung images, LIDC (Lung Image Database Consortium), was employed to evaluate our detection scheme as a consistency test. The detection performance for solid and solitary nodules in LIDC, included in the first data set opened by the consortium, was an 83% (10/12) true-positive rate with 3.3 false-positive marks per case. (orig.)

  7. Spike detection, characterization, and discrimination using feature analysis software written in LabVIEW.

    Science.gov (United States)

    Stewart, C M; Newlands, S D; Perachio, A A

    2004-12-01

    Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
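
    A simple Python sketch of the threshold-based event detection step described above (the original is implemented in LabVIEW) is given below; the trigger level and window lengths stand in for the user-adjustable settings.

        # Threshold-crossing event detection with a fixed window around each onset.
        import numpy as np

        rng = np.random.default_rng(9)
        signal = rng.normal(0.0, 1.0, 20000)
        signal[[2500, 9000, 15500]] += 12.0             # three artificial spikes
        trigger, pre, post = 6.0, 10, 20                # user-adjustable settings

        above = np.flatnonzero(signal > trigger)
        # Keep only the first sample of each contiguous threshold crossing.
        onsets = above[np.insert(np.diff(above) > 1, 0, True)]
        events = np.array([signal[i - pre:i + post] for i in onsets
                           if i - pre >= 0 and i + post <= len(signal)])
        print("detected events:", len(events), "waveform length:", events.shape[1])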

  8. Detection of corn and weed species by the combination of spectral, shape and textural features

    Science.gov (United States)

    Accurate detection of weeds in farmland can help reduce pesticide use and protect the agricultural environment. To develop intelligent equipment for weed detection, this study used an imaging spectrometer system, which supports micro-scale plant feature analysis by acquiring high-resolution hyper sp...

  9. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    Science.gov (United States)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations of the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.
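
    The formal model is only summarised in this record; the sketch below, with invented appliance, service and property names, illustrates the offline style of check described: two services conflict if their method invocations write different values to the same appliance property (an appliance interaction) or to the same environment property (an environment interaction).

        # Minimal sketch of offline feature-interaction detection, assuming a service
        # is a list of (target, property, value) method effects. All names are illustrative.
        from itertools import combinations

        services = {
            "AirConditioning": [("aircon", "power", "on"), ("env", "temperature", "low")],
            "Ventilation":     [("window", "state", "open"), ("env", "temperature", "outdoor")],
            "HomeTheater":     [("window", "state", "closed"), ("light", "level", "dim")],
        }

        def detect_interactions(svcs):
            """Report pairs of services whose effects conflict on the same target/property."""
            conflicts = []
            for (n1, eff1), (n2, eff2) in combinations(svcs.items(), 2):
                for t1, p1, v1 in eff1:
                    for t2, p2, v2 in eff2:
                        if (t1, p1) == (t2, p2) and v1 != v2:
                            kind = "environment" if t1 == "env" else "appliance"
                            conflicts.append((n1, n2, t1, p1, kind))
            return conflicts

        for c in detect_interactions(services):
            print(c)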

  10. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    Science.gov (United States)

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM imaging technique is a new medical imaging technique for which interest has yet to be established for diagnosis. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded to better recognition performances than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.
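
    As a point of reference for the classical features mentioned above, the sketch below computes a basic radius-1, 8-neighbour LBP histogram for a grayscale patch; the scattering transform itself is not reproduced here (a dedicated library such as Kymatio would typically be used), and the patch is synthetic.

        import numpy as np

        def lbp_histogram(img):
            """Basic 8-neighbour local binary pattern histogram of a grayscale image.

            This is the classic radius-1 LBP; the LQP variants and the scattering
            features discussed in the paper would replace or complement it.
            """
            c = img[1:-1, 1:-1]
            neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                          img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                          img[2:,   0:-2], img[1:-1, 0:-2]]
            codes = np.zeros(c.shape, dtype=np.int32)
            for bit, n in enumerate(neighbours):
                codes += (n >= c).astype(np.int32) << bit
            hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
            return hist

        # Illustrative usage on a random patch; FCFM frames would be used in practice.
        rng = np.random.default_rng(0)
        patch = rng.integers(0, 256, size=(64, 64))
        print(lbp_histogram(patch)[:8])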

  11. Vehicle parts detection based on Faster - RCNN with location constraints of vehicle parts feature point

    Science.gov (United States)

    Yang, Liqin; Sang, Nong; Gao, Changxin

    2018-03-01

    Vehicle parts detection plays an important role in public transportation safety and mobility. The goal of vehicle parts detection is to locate the position of each vehicle part. We propose a new approach that combines Faster RCNN and a three-level cascaded convolutional neural network (DCNN). The output of Faster RCNN is a series of bounding boxes with coordinate information, from which we can locate vehicle parts. DCNN can precisely predict the feature point position, which is the center of a vehicle part. We design an output strategy that combines these two results. This has two advantages. The quality of the bounding boxes is greatly improved, which means the vehicle part feature point positions can be located more precisely. Meanwhile, we preserve the positional relationship between vehicle parts and effectively improve the validity and reliability of the result. Using our algorithm, the performance of vehicle parts detection improves noticeably compared with Faster RCNN.

  12. Using Temporal Covariance of Motion and Geometric Features via Boosting for Human Fall Detection.

    Science.gov (United States)

    Ali, Syed Farooq; Khan, Reamsha; Mahmood, Arif; Hassan, Malik Tahir; Jeon, Moongu

    2018-06-12

    Fall-induced injuries are serious incidents for elderly as well as young persons. A real-time, automatic and accurate fall detection system can play a vital role in timely medical care, which ultimately helps to decrease injuries and complications. In this paper, we propose a fast and more accurate real-time system that can detect people falling in videos captured by surveillance cameras. Novel temporal and spatial variance-based features are proposed which comprise the discriminatory motion, geometric orientation and location of the person. These features are used with an ensemble learning strategy of boosting with J48 and AdaBoost classifiers. Experiments have been conducted on publicly available standard datasets, including Multiple Cameras Fall (with 2 classes and 3 classes) and UR Fall Detection, achieving percentage accuracies of 99.2, 99.25 and 99.0, respectively. Comparisons with nine state-of-the-art methods demonstrate the effectiveness of the proposed approach on both datasets.
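
    A minimal sketch of the general recipe described above, under several assumptions: per-frame centroid height, bounding-box aspect ratio and motion magnitude are assumed to be already extracted per clip, the features are simple variances/covariances rather than the paper's exact definitions, and scikit-learn's AdaBoostClassifier with a shallow decision tree stands in for the J48/AdaBoost ensemble (the estimator argument assumes scikit-learn 1.2 or later).

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        def clip_features(centroid_y, bbox_ratio, motion_mag):
            """Temporal variance/covariance style features over one clip (illustrative)."""
            return np.array([
                np.var(centroid_y),                    # vertical location variance
                np.var(bbox_ratio),                    # geometric orientation (aspect) variance
                np.var(motion_mag),                    # motion magnitude variance
                np.cov(centroid_y, motion_mag)[0, 1],  # temporal covariance term
            ])

        rng = np.random.default_rng(0)
        def synth_clip(fall):
            # Synthetic per-frame measurements for a 60-frame clip.
            t = np.arange(60)
            jump = (t > 30).astype(float) * (3.0 if fall else 0.3)
            return clip_features(jump + rng.normal(0, 0.1, 60),
                                 1.0 + jump / 3 + rng.normal(0, 0.05, 60),
                                 np.abs(np.gradient(jump)) + rng.normal(0, 0.05, 60))

        X = np.array([synth_clip(fall) for fall in [0, 1] * 50])
        y = np.array([0, 1] * 50)
        # Decision tree base learner inside AdaBoost, approximating the J48/AdaBoost ensemble.
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                                 n_estimators=50).fit(X, y)
        print(clf.score(X, y))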

  13. LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.

    Science.gov (United States)

    Zhang, Tao; Chen, Wanzhong

    2017-08-01

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly-developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). First, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), un-optimized support vector machine (SVM) and SVM optimized by genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracies of the presented approach are equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.

  14. Bathymetric and velocimetric surveys at highway bridges crossing the Missouri River in and into Missouri during summer flooding, July-August 2011

    Science.gov (United States)

    Huizinga, Richard J.

    2012-01-01

    Bathymetric and velocimetric surveys were conducted by the U.S. Geological Survey, in cooperation with the Kansas and Missouri Departments of Transportation, in the vicinity of 36 bridges at 27 highway crossings of the Missouri River between Brownville, Nebraska and St. Louis, Missouri, from July 13 through August 3, 2011, during a summer flood. A multibeam echo sounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 1,350 to 1,860 feet and extending across the active channel of the Missouri River. These bathymetric scans provide a "snapshot" of the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be used by the Kansas and Missouri Departments of Transportation to assess the bridges for stability and integrity issues with respect to bridge scour during floods. Bathymetric data were collected around every pier that was in water, except those at the edge of water, in extremely shallow water, or surrounded by debris rafts. Scour holes were present at most piers for which bathymetry could be obtained, except at piers on channel banks, those near or embedded in lateral or longitudinal spur dikes, and those on exposed bedrock outcrops. Scour holes observed at the surveyed bridges were examined with respect to depth and shape. Although exposure of parts of foundational support elements was observed at several piers, at most sites the exposure likely can be considered minimal compared to the overall substructure that remains buried in bed material; however, there were several notable exceptions where the bed material thickness between the bottom of the scour hole and bedrock was less than 6 feet. Such substantial exposure of usually buried substructural elements may warrant special observation in future flood events. Previous bathymetric surveys had been done at several of the sites

  15. Fault detection of Tennessee Eastman process based on topological features and SVM

    Science.gov (United States)

    Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen

    2018-03-01

    Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection in every industrial system. In this paper, we propose a novel method based on topological features and a support vector machine (SVM) for fault detection in industrial processes. The proposed method takes the global information of measured variables into account through a complex network model and uses the SVM to predict whether a system has developed a fault. The proposed method can be divided into four steps: network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.
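
    A minimal sketch of the four-step pipeline under stated assumptions: each sample is a window of measured process variables, the "complex network" is a simple thresholded correlation graph, and the topological features and synthetic data are illustrative rather than those used for TEP.

        import numpy as np
        import networkx as nx
        from sklearn.svm import SVC

        def topo_features(window, corr_threshold=0.6):
            """Graph features of the variable-correlation network for one time window."""
            corr = np.corrcoef(window.T)                                     # variables x variables
            adj = (np.abs(corr) > corr_threshold) & ~np.eye(corr.shape[0], dtype=bool)
            g = nx.from_numpy_array(adj.astype(int))
            degrees = [d for _, d in g.degree()]
            return np.array([np.mean(degrees),
                             nx.density(g),
                             nx.average_clustering(g),
                             nx.number_connected_components(g)])

        rng = np.random.default_rng(0)
        def window(faulty):
            base = rng.normal(size=(200, 10))
            if faulty:                       # a fault couples several variables together
                base[:, :4] += rng.normal(size=(200, 1))
            return base

        X = np.array([topo_features(window(f)) for f in [0, 1] * 40])
        y = np.array([0, 1] * 40)
        clf = SVC(kernel="rbf").fit(X[:60], y[:60])
        print("held-out accuracy:", clf.score(X[60:], y[60:]))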

  16. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

    Directory of Open Access Journals (Sweden)

    Xue Yang

    2018-01-01

    Full Text Available Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has state-of-the-art performance.

  17. LC-IMS-MS Feature Finder: detecting multidimensional liquid chromatography, ion mobility and mass spectrometry features in complex datasets.

    Science.gov (United States)

    Crowell, Kevin L; Slysz, Gordon W; Baker, Erin S; LaMarche, Brian L; Monroe, Matthew E; Ibrahim, Yehia M; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D

    2013-11-01

    The addition of ion mobility spectrometry to liquid chromatography-mass spectrometry experiments requires new, or updated, software tools to facilitate data processing. We introduce a command line software application LC-IMS-MS Feature Finder that searches for molecular ion signatures in multidimensional liquid chromatography-ion mobility spectrometry-mass spectrometry (LC-IMS-MS) data by clustering deisotoped peaks with similar monoisotopic mass, charge state, LC elution time and ion mobility drift time values. The software application includes an algorithm for detecting and quantifying co-eluting chemical species, including species that exist in multiple conformations that may have been separated in the IMS dimension. LC-IMS-MS Feature Finder is available as a command-line tool for download at http://omics.pnl.gov/software/LC-IMS-MS_Feature_Finder.php. The Microsoft.NET Framework 4.0 is required to run the software. All other dependencies are included with the software package. Usage of this software is limited to non-profit research use (see README). rds@pnnl.gov. Supplementary data are available at Bioinformatics online.
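
    The tool itself is a compiled .NET application; purely for intuition, the sketch below shows the kind of grouping it describes, greedily clustering deisotoped peaks whose monoisotopic mass, charge state, LC elution time and IMS drift time agree within tolerances. The tolerances and peak values are invented.

        from dataclasses import dataclass, field

        @dataclass
        class Peak:
            mass: float        # monoisotopic mass (Da)
            charge: int
            elution: float     # LC elution time (min)
            drift: float       # IMS drift time (ms)

        @dataclass
        class Feature:
            members: list = field(default_factory=list)

        def close(p, q, mass_ppm=20.0, elution_tol=0.5, drift_tol=0.3):
            """True if two peaks agree within all four tolerances (placeholder values)."""
            return (p.charge == q.charge
                    and abs(p.mass - q.mass) / q.mass * 1e6 <= mass_ppm
                    and abs(p.elution - q.elution) <= elution_tol
                    and abs(p.drift - q.drift) <= drift_tol)

        def find_features(peaks):
            """Greedy grouping of peaks into features (not the tool's actual algorithm)."""
            features = []
            for p in peaks:
                for f in features:
                    if close(p, f.members[0]):
                        f.members.append(p)
                        break
                else:
                    features.append(Feature(members=[p]))
            return features

        peaks = [Peak(1500.730, 2, 23.1, 12.4), Peak(1500.733, 2, 23.2, 12.5),
                 Peak(1500.731, 2, 23.1, 14.9),   # same mass, different conformation
                 Peak(987.512, 1, 10.0, 8.1)]
        print([len(f.members) for f in find_features(peaks)])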

  18. Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High‐Resolution Spectral Features

    Directory of Open Access Journals (Sweden)

    Hyoung‐Gook Kim

    2017-12-01

    Full Text Available Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception‐based spatial and spectral‐domain noise‐reduced harmonic features are extracted from multichannel audio and used as high‐resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short‐term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.

  19. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm. PMID:27483285

  20. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.

  1. Testing of Haar-Like Feature in Region of Interest Detection for Automated Target Recognition (ATR) System

    Science.gov (United States)

    Zhang, Yuhan; Lu, Dr. Thomas

    2010-01-01

    The objectives of this project were to develop an ROI (Region of Interest) detector using Haar-like features, similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector that uses an Optimal Trade-off Maximum Average Correlation Height (OTMACH) filter. The ROI detector included three parts: (1) automated Haar-like feature selection, to find a small set of the most relevant Haar-like features for detecting ROIs that contained a target; (2) given the small set of Haar-like features from the previous step, training a neural network to recognize ROIs with targets by taking the Haar-like features as inputs; (3) using the trained neural network from the previous step, developing a filtering method to process the neural network responses into a small set of regions of interest. All three parts needed to be coded in Matlab. The parameters in the detector needed to be trained by machine learning and tested with specific datasets. Since the OpenCV library and Haar-like features were not available in Matlab, the Haar-like feature calculation needed to be implemented in Matlab. The code for Adaptive Boosting and max/min filters in Matlab could be found on the Internet but needed to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing the accuracy and the speed of the new detector against the existing OTMACH detector. The speed was measured as the average speed of finding the regions of interest in an image. The accuracy was measured by the number of false positives (false alarms) at the same detection rate between the two detectors.
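
    The project code was written in Matlab; as a language-neutral illustration of the Haar-like feature calculation it had to implement, the sketch below builds an integral image and evaluates a two-rectangle feature in NumPy. The rectangle positions and the test image are arbitrary.

        import numpy as np

        def integral_image(img):
            """Summed-area table with a leading row/column of zeros."""
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
            ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
            return ii

        def rect_sum(ii, r, c, h, w):
            """Sum of pixels of the h x w rectangle whose top-left corner is (r, c)."""
            return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

        def haar_two_rect_vertical(ii, r, c, h, w):
            """Left-minus-right two-rectangle Haar-like feature (vertical edge response)."""
            half = w // 2
            return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

        rng = np.random.default_rng(0)
        img = rng.random((24, 24))
        ii = integral_image(img)
        print(haar_two_rect_vertical(ii, 4, 4, 12, 12))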

  2. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the linear subpixel features of the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  3. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT)

    Directory of Open Access Journals (Sweden)

    Boris Jutzi

    2011-09-01

    Full Text Available The real and imaginary parts are proposed as an alternative to the usual Polar representation of complex-valued images. It is proven that the transformation from Polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT) detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: Range Imaging, Laser Scanning, and Structured Light Projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect).

  4. A change detection method for remote sensing image based on LBP and SURF feature

    Science.gov (United States)

    Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun

    2018-04-01

    Detecting changes in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of ground objects is more stable than the gray level in high-resolution remote sensing images, and the texture features of Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) offer fast extraction and illumination invariance. A change detection method for matched remote sensing image pairs is presented, which divides the image into blocks and compares the similarity of each block using LBP and SURF to decide whether it has changed, and region growing is adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.

  5. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

    This paper presents a new illumination invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference of Gaussians operator. It specifically targets the underexposed image regions, exhibiting increased sensitivity to low contrast, while not affecting performance in the correctly exposed ones. The proposed operator can be used to create a scale-space, which in turn can be a part of a SIFT-based detector module. The main advantage of this illumination invariant scale-space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance that the proposed operator, as well as other existing operators, exhibits, a new benchmark dataset is introduced. It features a greater variety of imaging conditions, compared to existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems, working in environments under uncontrolled illumination conditions. (paper)

  6. Anthropogenic influence on recent bathymetric change in west-central San Francisco Bay

    Science.gov (United States)

    Barnard, Patrick L.; Kvitek, Rikk G.

    2010-01-01

    Two multibeam sonar surveys of west-central San Francisco Bay, California, were conducted in 1997 and 2008. Bathymetric change analysis between the two surveys indicates a loss of 14.1 million cubic meters (-3.1 cm/yr) of sediment during this time period, representing an approximately three-fold acceleration of the rate that was observed from prior depth change analysis from 1947 to 1979 for all of Central Bay, using more spatially coarse National Ocean Service (NOS) soundings. The portions of the overlapping survey areas between 1997 and 2008 designated as aggregate mining lease sites lost sediment at five times the rate of the remainder of west-central San Francisco Bay. Despite covering only 28% of the analysis area, volume change within leasing areas accounted for 9.2 million cubic meters of sediment loss, while the rest of the area lost 4.9 million cubic meters of sediment. The uncertainty of this recent analysis is more tightly constrained due to more stringent controls on vertical and horizontal position via tightly coupled, inertially aided differential Global Positioning Systems (GPS) solutions for survey vessel trajectory that virtually eliminate inaccuracies from traditional tide modeling and vessel motion artifacts. Further, quantification of systematic depth measurement error can now be calculated through comparison of static surfaces (e.g., bedrock) between surveys using seafloor habitat maps based on acoustic backscatter measurements and ground-truthing with grab samples and underwater video. Sediment loss in the entire San Francisco Bay Coastal System during the last half-century, as estimated from a series of bathymetric change studies, is 240 million cubic meters, and most of this is believed to be coarse sediment (i.e., sand and gravel) from Central Bay and the San Francisco Bar, which is likely to limit the sand supply to adjacent, open-coast beaches. This hypothesis is supported by a calibrated numerical model in a related study that indicates
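
    The volume and rate figures quoted above come from differencing gridded bathymetric surfaces; a minimal sketch of that bookkeeping is shown below with synthetic grids (the cell size, depths and rates are illustrative, not the survey data).

        import numpy as np

        def volume_change(z_new, z_old, cell_size_m, years):
            """Net sediment volume change (m^3) and mean vertical rate (cm/yr)."""
            dz = z_new - z_old                        # positive = deposition
            cell_area = cell_size_m ** 2
            volume = float(np.nansum(dz) * cell_area)
            mean_rate_cm_yr = float(np.nanmean(dz) / years * 100.0)
            return volume, mean_rate_cm_yr

        rng = np.random.default_rng(0)
        z_1997 = -20.0 + rng.normal(0, 0.5, (500, 500))
        z_2008 = z_1997 - 0.3 + rng.normal(0, 0.1, (500, 500))   # net erosion
        vol, rate = volume_change(z_2008, z_1997, cell_size_m=10.0, years=11.0)
        print(f"volume change: {vol:.0f} m^3, mean rate: {rate:.2f} cm/yr")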

  7. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
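
    A minimal sketch of the workflow the study recommends, under simplifying assumptions: the feature-extraction step is just removal of the mean seasonal cycle, the detector is the k-nearest-neighbours mean distance score, and the multivariate time series and injected event are synthetic.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def remove_seasonal_cycle(x, period):
            """Subtract the mean seasonal cycle from each variable (columns of x)."""
            anomalies = x.copy()
            for phase in range(period):
                anomalies[phase::period] -= x[phase::period].mean(axis=0)
            return anomalies

        def knn_anomaly_score(x, k=10):
            """Mean distance to the k nearest neighbours, per time step."""
            nn = NearestNeighbors(n_neighbors=k + 1).fit(x)
            dist, _ = nn.kneighbors(x)
            return dist[:, 1:].mean(axis=1)           # drop the zero self-distance

        rng = np.random.default_rng(0)
        t = np.arange(10 * 46)                        # ten years of 8-daily data
        x = np.column_stack([np.sin(2 * np.pi * t / 46) + rng.normal(0, 0.1, t.size)
                             for _ in range(3)])
        x[300:305] += 2.0                             # an injected extreme event
        scores = knn_anomaly_score(remove_seasonal_cycle(x, period=46))
        print("most anomalous time steps:", np.argsort(scores)[-5:])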

  8. The Investigation of Active Tectonism Offshore Cide-Sinop, Southern Black Sea by Seismic Reflection and Bathymetric Data

    Science.gov (United States)

    Alp, Y. I.; Ocakoglu, N.; Kılıc, F.; Ozel, A. O.

    2017-12-01

    The active tectonism offshore Cide-Sinop on the Southern Black Sea shelf was investigated for the first time using multi-beam bathymetric and multi-channel seismic reflection data under the Research Project of The Scientific and Technological Research Council of Turkey (TUBİTAK-ÇAYDAG-114Y057). The multi-channel seismic reflection data, about 700 km in length, were acquired in 1991 by the Turkish Petroleum Company (TP). Multibeam bathymetric data were collected between 2002 and 2008 by the Turkish Navy, Department of Navigation, Hydrography and Oceanography (TN-DNHO). Conventional data processing steps were applied as follows: in-line geometry definition, shot-receiver static correction, editing, shot muting, gain correction, CDP sorting, velocity analysis, NMO correction, muting, stacking, predictive deconvolution, band-pass filtering, finite-difference time migration, and automatic gain correction. The offshore area is represented by a quite smooth and large shelf plain, approximately 25 km wide, with a water depth of about -100 m. The shelf gently deepens and is limited by the shelf break at approximately the -120 m contour. The seafloor morphology is characterised by an erosional surface. Structurally, E-W trending strike-slip faults, generally with compressional components, and reverse/thrust faults have been regionally mapped for the first time. Most of these faults deform all seismic units and reach the seafloor, delimiting the morphological highs and submarine plains. Thus, these faults are interpreted as active faults. These results support the idea that the area is under an active compressional tectonic regime.

  9. AFSC/RACE/GAP/Prescott: Norton Sound Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — We assembled approximately 230,000 National Ocean Service (NOS) bathymetric soundings from 39 lead-line and single-beam echosounder hydrographic surveys conducted...

  10. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    Science.gov (United States)

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.

    2015-01-01

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 1e − 3) on all calculi from 1 to 433 mm3 in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255

  11. Explaining bathymetric diversity patterns in marine benthic invertebrates and demersal fishes: physiological contributions to adaptation of life at depth.

    Science.gov (United States)

    Brown, Alastair; Thatje, Sven

    2014-05-01

    Bathymetric biodiversity patterns of marine benthic invertebrates and demersal fishes have been identified in the extant fauna of the deep continental margins. Depth zonation is widespread and evident through a transition between shelf and slope fauna from the shelf break to 1000 m, and a transition between slope and abyssal fauna from 2000 to 3000 m; these transitions are characterised by high species turnover. A unimodal pattern of diversity with depth peaks between 1000 and 3000 m, despite the relatively low area represented by these depths. Zonation is thought to result from the colonisation of the deep sea by shallow-water organisms following multiple mass extinction events throughout the Phanerozoic. The effects of low temperature and high pressure act across hierarchical levels of biological organisation and appear sufficient to limit the distributions of such shallow-water species. Hydrostatic pressures of bathyal depths have consistently been identified experimentally as the maximum tolerated by shallow-water and upper bathyal benthic invertebrates at in situ temperatures, and adaptation appears required for passage to deeper water in both benthic invertebrates and demersal fishes. Together, this suggests that a hyperbaric and thermal physiological bottleneck at bathyal depths contributes to bathymetric zonation. The peak of the unimodal diversity-depth pattern typically occurs at these depths even though the area represented by these depths is relatively low. Although it is recognised that, over long evolutionary time scales, shallow-water diversity patterns are driven by speciation, little consideration has been given to the potential implications for species distribution patterns with depth. Molecular and morphological evidence indicates that cool bathyal waters are the primary site of adaptive radiation in the deep sea, and we hypothesise that bathymetric variation in speciation rates could drive the unimodal diversity-depth pattern over time. Thermal

  12. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of Glaucoma in retinal images. The methodology used in the study consisted of: acquisition of an image database, Optic Disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  13. Rotation-invariant features for multi-oriented text detection in natural images.

    Directory of Open Access Journals (Sweden)

    Cong Yao

    Full Text Available Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.

  14. Radiomic features for prostate cancer detection on MRI differ between the transition and peripheral zones: Preliminary findings from a multi-institutional study.

    Science.gov (United States)

    Ginsburg, Shoshana B; Algohary, Ahmad; Pahwa, Shivani; Gulani, Vikas; Ponsky, Lee; Aronen, Hannu J; Boström, Peter J; Böhm, Maret; Haynes, Anne-Maree; Brenner, Phillip; Delprado, Warick; Thompson, James; Pulbrock, Marley; Taimen, Pekka; Villani, Robert; Stricker, Phillip; Rastinehad, Ardeshir R; Jambor, Ivan; Madabhushi, Anant

    2017-07-01

    To evaluate in a multi-institutional study whether radiomic features useful for prostate cancer (PCa) detection from 3 Tesla (T) multi-parametric MRI (mpMRI) in the transition zone (TZ) differ from those in the peripheral zone (PZ). 3T mpMRI, including T2-weighted (T2w), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced MRI (DCE-MRI), were retrospectively obtained from 80 patients at three institutions. This study was approved by the institutional review board of each participating institution. First-order statistical, co-occurrence, and wavelet features were extracted from T2w MRI and ADC maps, and contrast kinetic features were extracted from DCE-MRI. Feature selection was performed to identify 10 features for PCa detection in the TZ and PZ, respectively. Two logistic regression classifiers used these features to detect PCa and were evaluated by area under the receiver-operating characteristic curve (AUC). Classifier performance was compared with a zone-ignorant classifier. Radiomic features that were identified as useful for PCa detection differed between TZ and PZ. When classification was performed on a per-voxel basis, a PZ-specific classifier detected PZ tumors on an independent test set with significantly higher accuracy (AUC = 0.61-0.71) than a zone-ignorant classifier trained to detect cancer throughout the entire prostate (P  0.14) were obtained for all institutions. A zone-aware classifier significantly improves the accuracy of cancer detection in the PZ. 3 Technical Efficacy: Stage 2 J. MAGN. RESON. IMAGING 2017;46:184-193. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  16. On-Line Fault Detection in Wind Turbine Transmission System using Adaptive Filter and Robust Statistical Features

    Directory of Open Access Journals (Sweden)

    Mark Frogley

    2013-01-01

    Full Text Available To reduce maintenance costs, avoid catastrophic failures, and improve wind turbine transmission system reliability, an online condition monitoring system is critically important. In real applications, many rotating mechanical faults, such as bearing surface defects, gear tooth cracks, chipped gear teeth and so on, generate impulsive signals. When these types of faults develop inside rotating machinery, an impact force can be generated each time the rotating components pass over the damage point. The impact force will cause a ringing of the support structure at the structural natural frequency. By effectively detecting those periodic impulse signals, one group of rotating machine faults can be detected and diagnosed. However, in real wind turbine operations, impulsive fault signals are usually weak relative to the background noise and the vibration signals generated by other healthy components, such as shafts, blades, gears and so on. Moreover, wind turbine transmission systems work under dynamic operating conditions. This further increases the difficulty of fault detection and diagnostics. Therefore, advanced signal processing methods to enhance the impulsive signals are greatly needed. In this paper, an adaptive filtering technique is applied to enhance the signal-to-noise ratio of fault impulses in wind turbine gear transmission systems. Multiple statistical features designed to quantify the impulsive content of the processed signal are extracted for bearing fault detection. The multidimensional features are then transformed into a one-dimensional feature. A minimum error rate classifier is designed based on the compressed feature to identify gear transmission systems with defects. Real wind turbine vibration signals are used to demonstrate the effectiveness of the presented methodology.
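
    As a rough illustration of the impulse-sensitive statistical features the abstract refers to (the adaptive filtering stage is omitted, and the vibration record is synthetic), the sketch below compares a healthy signal with one containing periodic impacts.

        import numpy as np
        from scipy.stats import kurtosis

        def impulse_features(x):
            """Kurtosis, crest factor and impulse factor of a vibration record."""
            rms = np.sqrt(np.mean(x ** 2))
            peak = np.max(np.abs(x))
            return {
                "kurtosis": float(kurtosis(x, fisher=False)),   # ~3 for Gaussian noise
                "crest_factor": float(peak / rms),
                "impulse_factor": float(peak / np.mean(np.abs(x))),
            }

        rng = np.random.default_rng(0)
        fs, seconds = 20_000, 1
        healthy = rng.normal(0, 1.0, fs * seconds)
        faulty = healthy.copy()
        faulty[::2000] += 8.0          # periodic impacts from a localized defect
        print("healthy:", impulse_features(healthy))
        print("faulty :", impulse_features(faulty))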

  17. Flying control of small-type helicopter by detecting its in-air natural features

    Directory of Open Access Journals (Sweden)

    Chinthaka Premachandra

    2015-05-01

    Full Text Available Control of a small-type helicopter is an interesting research area in unmanned aerial vehicle development. This study aims to detect a typical helicopter, unequipped with markers, as a means of resolving the various issues of prior studies. Accordingly, we propose a method of detecting the helicopter location and pose by using an infrastructure camera to recognize its in-air natural features, such as the ellipse traced by the rotation of the helicopter's propellers. A single-rotor helicopter was used as the controlled airframe in our experiments. Here, the helicopter location is measured by detecting the center of the main rotor ellipse, and the pose is measured from the relationship between the main rotor ellipse and the tail rotor ellipse. Using these detection results, we confirmed through experiments that hovering control of the helicopter is possible.

  18. Inverted dipole feature in directional detection of exothermic dark matter

    International Nuclear Information System (INIS)

    Bozorgnia, Nassim; Gelmini, Graciela B.; Gondolo, Paolo

    2017-01-01

    Directional dark matter detection attempts to measure the direction of motion of nuclei recoiling after having interacted with dark matter particles in the halo of our Galaxy. Due to Earth's motion with respect to the Galaxy, the dark matter flux is concentrated around a preferential direction. An anisotropy in the recoil direction rate is expected as an unmistakable signature of dark matter. The average nuclear recoil direction is expected to coincide with the average direction of dark matter particles arriving to Earth. Here we point out that for a particular type of dark matter, inelastic exothermic dark matter, the mean recoil direction as well as a secondary feature, a ring of maximum recoil rate around the mean recoil direction, could instead be opposite to the average dark matter arrival direction. Thus, the detection of an average nuclear recoil direction opposite to the usually expected direction would constitute a spectacular experimental confirmation of this type of dark matter.

  19. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.

    Science.gov (United States)

    Lopes, U K; Valiati, J F

    2017-10-01

    It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiography by properly trained radiologists. Significant research can be found on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent improvements on deep learning accomplished excellent results classifying images on diverse domains, but its application for tuberculosis diagnosis remains limited. Thus, the focus of this work is to produce an investigation that will advance the research in the area, presenting three proposals to the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The obtained results are competitive with published works demonstrating the potential of pre-trained convolutional networks as medical image feature extractors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2018-02-01

    Full Text Available To improve the accuracy of change detection in urban areas using bi-temporal high-resolution remote sensing images, a novel object-based change detection scheme combining multiple features and ensemble learning is proposed in this paper. Image segmentation is conducted to determine the objects in bi-temporal images separately. Subsequently, three kinds of object features, i.e., spectral, shape and texture, are extracted. Using the image differencing process, a difference image is generated and used as the input for nonlinear supervised classifiers, including k-nearest neighbor, support vector machine, extreme learning machine and random forest. Finally, the results of multiple classifiers are integrated using an ensemble rule called weighted voting to generate the final change detection result. Experimental results of two pairs of real high-resolution remote sensing datasets demonstrate that the proposed approach outperforms the traditional methods in terms of overall accuracy and generates change detection maps with a higher number of homogeneous regions in urban areas. Moreover, the influences of segmentation scale and the feature selection strategy on the change detection performance are also analyzed and discussed.
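
    A minimal sketch of the weighted-voting ensemble rule described above, assuming each base classifier's change/no-change label per image object and a weight (for example its validation accuracy) are already available; the labels and weights shown are invented.

        import numpy as np

        def weighted_vote(labels, weights):
            """Combine per-classifier labels (0 = unchanged, 1 = changed) per object."""
            labels = np.asarray(labels, dtype=float)        # classifiers x objects
            weights = np.asarray(weights, dtype=float)[:, None]
            score = (labels * weights).sum(axis=0) / weights.sum()
            return (score >= 0.5).astype(int)

        # Four hypothetical base classifiers (kNN, SVM, ELM, RF) voting on 6 objects.
        labels = [[1, 0, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1],
                  [1, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 1, 0]]
        weights = [0.85, 0.88, 0.80, 0.90]                  # e.g. validation accuracies
        print(weighted_vote(labels, weights))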

  1. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.

  2. CoMIC: Good features for detection and matching at object boundaries

    OpenAIRE

    Ravindran, Swarna Kamlam; Mittal, Anurag

    2014-01-01

    Feature or interest points typically use information aggregation in 2D patches which does not remain stable at object boundaries when there is object motion against a significantly varying background. Level or iso-intensity curves are much more stable under such conditions, especially the longer ones. In this paper, we identify stable portions on long iso-curves and detect corners on them. Further, the iso-curve associated with a corner is used to discard portions from the background and impr...

  3. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in a petri dish with various thicknesses, mixed samples with different colors and materials of fabrics, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of ("depth" minus "peak") over ("depth" plus "peak") within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of the selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
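
    A minimal sketch of the first method's index, assuming the reflectance spectrum is sampled on a regular wavelength grid; the band positions used below are placeholders, not the wavelengths selected in the paper.

        import numpy as np

        def nd_index(reflectance, wavelengths, depth_wl, peak_wl):
            """(depth - peak) / (depth + peak) computed from a reflectance spectrum."""
            depth = np.interp(depth_wl, wavelengths, reflectance)
            peak = np.interp(peak_wl, wavelengths, reflectance)
            return (depth - peak) / (depth + peak)

        wavelengths = np.linspace(350, 2500, 2151)           # nm
        spectrum = 0.4 + 0.1 * np.sin(wavelengths / 300.0)   # stand-in spectrum
        print(nd_index(spectrum, wavelengths, depth_wl=760.0, peak_wl=620.0))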

  4. Bright Retinal Lesions Detection using Colour Fundus Images Containing Reflective Features

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Chaum, Edward [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK)

    2009-01-01

    In recent years, the research community has developed many techniques to detect and diagnose diabetic retinopathy with retinal fundus images. This is a necessary step for the implementation of a large-scale screening effort in rural areas where ophthalmologists are not available. In the United States of America, the incidence of diabetes is worryingly increasing among the young population. Retinal fundus images of patients younger than 20 years old present a high amount of reflection due to the Nerve Fibre Layer (NFL); the younger the patient, the more visible these reflections are. We are not aware of algorithms able to explicitly deal with this type of reflection artefact. This paper presents a technique to detect bright lesions also in patients with a high degree of reflective NFL. First, the candidate bright lesions are detected using image equalization and relatively simple histogram analysis. Then, a classifier is trained using a texture descriptor (multi-scale Local Binary Patterns) and other features in order to remove the false positives in the lesion detection. Finally, the area of the lesions is used to diagnose diabetic retinopathy. Our database consists of 33 images from a telemedicine network currently under development. When determining moderate to high diabetic retinopathy using the detected bright lesions, the algorithm achieves a sensitivity of 100% at a specificity of 100% using hold-one-out testing.

  5. Regions of micro-calcifications clusters detection based on new features from imbalance data in mammograms

    Science.gov (United States)

    Wang, Keju; Dong, Min; Yang, Zhen; Guo, Yanan; Ma, Yide

    2017-02-01

    Breast cancer is the most common cancer among women. Micro-calcification clusters on X-ray mammograms are among the most important abnormalities, and they are effective for early cancer detection. The Surrounding Region Dependence Method (SRDM), a statistical texture analysis method, is applied for detecting Regions of Interest (ROIs) containing micro-calcifications. Inspired by the SRDM, we present a method that extracts gray-level and other features which are effective for predicting the positive and negative regions of micro-calcification clusters in mammograms. By constructing a set of artificial images containing only micro-calcifications, we locate the suspicious calcification pixels of an SRDM matrix in the original image map. Features are extracted from these pixels for the imbalanced data, and then the repeated random subsampling method and a Random Forest (RF) classifier are used for classification. The True Positive (TP) rate and False Positive (FP) rate reflect the quality of the result. The TP rate is 90% and the FP rate is 88.8% when the threshold q is 10. We draw the Receiver Operating Characteristic (ROC) curve, and the Area Under the ROC Curve (AUC) value reaches 0.9224. The experiment indicates that our method is effective. A novel method for detecting regions of micro-calcification clusters is developed, which is based on new features for imbalanced data in mammography, and it can be considered to help improve the accuracy of computer-aided diagnosis of breast cancer.
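
    A minimal sketch of the imbalance-handling strategy named above (repeated random subsampling with a Random Forest), using synthetic data from scikit-learn; the sampling rounds, forest size and decision threshold are illustrative choices.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        def repeated_subsampling_rf(X, y, rounds=10, seed=0):
            """Train one forest per balanced random subsample of the majority class."""
            rng = np.random.default_rng(seed)
            pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
            models = []
            for _ in range(rounds):
                sampled_neg = rng.choice(neg, size=pos.size, replace=False)
                idx = np.concatenate([pos, sampled_neg])
                models.append(RandomForestClassifier(
                    n_estimators=100,
                    random_state=int(rng.integers(1_000_000))).fit(X[idx], y[idx]))
            return models

        def predict_vote(models, X):
            """Average the predicted probabilities of all forests and threshold at 0.5."""
            probs = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
            return (probs >= 0.5).astype(int)

        X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
        models = repeated_subsampling_rf(X[:1500], y[:1500])
        pred = predict_vote(models, X[1500:])
        print("detected positives:", pred.sum(), "of", y[1500:].sum())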

  6. Bathymetric and longitudinal distribution analysis of the rockfish Helicolenus dactylopterus (Delaroche, 1809) in the southern Tyrrhenian Sea (central Mediterranean)

    Directory of Open Access Journals (Sweden)

    T. ROMEO

    2009-06-01

    Full Text Available This study provides information on the heterogeneity of the bathymetric and longitudinal distribution of the rockfish Helicolenus dactylopterus in the southern Tyrrhenian Sea. Data were drawn from experimental bottom trawl (1996-2002) and bottom trap (2001-02) surveys. The frequency of occurrence and mean relative density (N/km2) and biomass (kg/km2) indexes were calculated for two survey seasons (spring and autumn), four geographic sectors and three depth strata. MANOVA was used to test fish abundance among years, sectors and strata. Analysis of the length-frequency distributions was carried out by two-way (gears and depths) ANOVA, post hoc multiple comparisons for testing differences among depths and Student's t test for testing differences between gears. The length-weight relationship was also estimated and the allometric coefficient was tested with the Student's t test. The results showed a significant positive bathymetric gradient of sizes for both trawl and trap surveys; at the same depths, fish caught by traps were significantly longer than those caught by trawl. In spring surveys, significant differences were found among strata for both abundance indexes; in autumn surveys, significant differences between depth strata were found only for density indices. The distribution and abundance patterns of H. dactylopterus along the southern Tyrrhenian Sea were homogeneous among sectors. The length-weight relationship showed a significant positive allometric growth.

  7. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    Directory of Open Access Journals (Sweden)

    P. Amudha

    2015-01-01

    Full Text Available Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, a hybrid algorithm is proposed in this paper that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and is found to differ significantly.

  8. Bathymetric and velocimetric surveys at highway bridges crossing the Missouri and Mississippi Rivers near St. Louis, Missouri, May 23–27, 2016

    Science.gov (United States)

    Huizinga, Richard J.

    2017-09-26

    Bathymetric and velocimetric data were collected by the U.S. Geological Survey, in cooperation with the Missouri Department of Transportation, near 13 bridges at 8 highway crossings of the Missouri and Mississippi Rivers in the greater St. Louis, Missouri, area from May 23 to 27, 2016. A multibeam echosounder mapping system was used to obtain channel-bed elevations for river reaches ranging from 1,640 to 1,970 feet longitudinally and extending laterally across the active channel from bank to bank during low to moderate flood flow conditions. These bathymetric surveys indicate the channel conditions at the time of the surveys and provide characteristics of scour holes that may be useful in the development of predictive guidelines or equations for scour holes. These data also may be useful to the Missouri Department of Transportation as a low to moderate flood flow comparison to help assess the bridges for stability and integrity issues with respect to bridge scour during floods. Bathymetric data were collected around every pier that was in water, except those at the edge of water, and scour holes were observed at most surveyed piers. The observed scour holes at the surveyed bridges were examined with respect to shape and depth. The frontal slope values determined for scour holes observed in the current (2016) study generally are similar to recommended values in the literature and to values determined for scour holes in previous bathymetric surveys. Several of the structures had piers that were skewed to primary approach flow, as indicated by the scour hole being longer on the side of the pier with impinging flow, and some amount of deposition on the leeward side, as typically has been observed at piers skewed to approach flow; however, at most skewed piers in the current (2016) study, the scour hole was deeper on the leeward side of the pier. At most of these piers, the angled approach flow was the result of a deflection or contraction of flow caused by a spur dike

  9. Lake Bathymetric Aquatic Vegetation

    Data.gov (United States)

    Minnesota Department of Natural Resources — Aquatic vegetation represented as polygon features, coded with vegetation type (emergent, submergent, etc.) and field survey date. Polygons were digitized from...

  10. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching.

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong

    2015-12-21

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians' head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and that the accuracy has improved by about 9% compared with approaches based on high-dimensional features only.

  11. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching

    Directory of Open Access Journals (Sweden)

    Guohua Wang

    2015-12-01

    Full Text Available Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and that the accuracy has improved by about 9% compared with approaches based on high-dimensional features only.

  12. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination.

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

    We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). We proposed a novel feature combination comprising movement-related potentials such as the readiness potential, event-related desynchronization features besides the event-related potentials (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior compared to using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  13. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

    Objective. We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. Approach. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Main results. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). Significance. We proposed a novel feature combination comprising movement-related potentials such as the readiness potential, event-related desynchronization features besides the event-related potentials (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior compared to using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  14. Archaeological Feature Detection from Archive Aerial Photography with a Sfm-Mvs and Image Enhancement Pipeline

    Science.gov (United States)

    Peppa, M. V.; Mills, J. P.; Fieber, K. D.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. G.

    2018-05-01

    Understanding and protecting cultural heritage involves the detection and long-term documentation of archaeological remains alongside the spatio-temporal analysis of their landscape evolution. Archive aerial photography can illuminate traces of ancient features which typically appear with different brightness values from their surrounding environment, but are not always well defined. This research investigates the implementation of the Structure-from-Motion - Multi-View Stereo image matching approach with an image enhancement algorithm to derive three epochs of orthomosaics and digital surface models from visible and near-infrared historic aerial photography. The enhancement algorithm uses decorrelation stretching to improve the contrast of the orthomosaics so that archaeological features are better detected. Results include 2D / 3D locations of detected archaeological traces stored in a geodatabase for further archaeological interpretation and correlation with benchmark observations. The study also discusses the merits and difficulties of the process involved. This research is based on a European-wide project, entitled "Cultural Heritage Through Time", and the case study research was carried out as a component of the project in the UK.

  15. Early detection of breast cancer mass lesions by mammogram segmentation images based on texture features

    International Nuclear Information System (INIS)

    Mahmood, F.H.

    2012-01-01

    Mammography is at present one of the available methods for early detection of masses or abnormalities related to breast cancer, such as calcifications. The challenge lies in early and accurate detection to counter the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed at advanced stages with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years. This paper proposes a computer-aided diagnostic system for the extraction of features such as mass lesions in mammograms for early detection of breast cancer. The proposed technique is based on a four-step procedure: (a) preprocessing of the image, (b) specification of regions of interest (ROI), (c) supervised segmentation comprising two stages performed using the minimum distance (MD) criterion, and (d) feature extraction based on Gray Level Co-occurrence Matrices (GLCM) for the identification of mass lesions. The method suggested for the detection of mass lesions from mammogram image segmentation and analysis was tested on several images taken from Al-Ilwiya Hospital in Baghdad, Iraq. The proposed technique shows promising results.
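
    Step (d) of the procedure relies on Gray Level Co-occurrence Matrices. The snippet below is a minimal sketch of GLCM texture feature extraction from an ROI using scikit-image; the quantization level, distances, angles and property list are illustrative choices, not the paper's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(roi, levels=64):
        """Texture features from a gray-level co-occurrence matrix of an ROI.

        `roi` is a 2-D uint8 array (a segmented region of interest); the
        distances, angles and properties below are illustrative choices.
        """
        # Quantize to `levels` gray levels to keep the matrix small
        q = (roi.astype(np.float64) / 256.0 * levels).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in ROI
    print(glcm_features(roi).shape)  # 4 properties x 2 distances x 2 angles = 16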

  16. An object-oriented feature-based design system: face-based detection of feature interactions

    International Nuclear Information System (INIS)

    Ariffin Abdul Razak

    1999-01-01

    This paper presents an object-oriented, feature-based design system which supports the integration of design and manufacture by ensuring that part descriptions fully account for any feature interactions. Manufacturing information is extracted from the feature descriptions in the form of volumes and Tool Access Directions (TADs). When features interact, both volumes and TADs are updated. This methodology has been demonstrated by developing a prototype system in which ACIS attributes are used to record feature information within the data structure of the solid model. The system was implemented in the C++ programming language and embedded in a menu-driven X-windows user interface to the ACIS 3D Toolkit. (author)

  17. Feature recognition and detection for ancient architecture based on machine vision

    Science.gov (United States)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

    Ancient architecture has a very high historical and artistic value. Ancient buildings feature a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Therefore, the survey and statistical analysis of these compositional and decorative features play an important role in subsequent research. Until recently, however, such components have mainly been catalogued manually, which is inefficient and consumes a great deal of labor and time. At present, with the strong support of big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface texture images of ancient building components are classified manually to form a sample set. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  18. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.

  19. Automated detection of heart ailments from 12-lead ECG using complex wavelet sub-band bi-spectrum features.

    Science.gov (United States)

    Tripathy, Rajesh Kumar; Dandapat, Samarendra

    2017-04-01

    Complex wavelet sub-band bi-spectrum (CWSB) features are proposed for the detection and classification of myocardial infarction (MI), heart muscle disease (HMD) and bundle branch block (BBB) from 12-lead ECG. The dual-tree CW transform of the 12-lead ECG produces CW coefficients at different sub-bands. Higher-order CW analysis is used for evaluation of the CWSB. The mean absolute value of the CWSB, together with the numbers of negative and positive phase angles from the phase of the CWSB of the 12-lead ECG, are evaluated as features. Extreme learning machine and support vector machine (SVM) classifiers are used to evaluate the performance of the CWSB features. Experimental results show that the proposed CWSB features of 12-lead ECG and the SVM classifier are successful for classification of various heart pathologies. The individual accuracy values for the MI, HMD and BBB classes are 98.37, 97.39 and 96.40%, respectively, using the SVM classifier with a radial basis function kernel. A comparison has also been made with existing 12-lead ECG-based cardiac disease detection techniques.

  20. AFSC/RACE/GAP/Zimmermann: Cook Inlet Bathymetry Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — We assembled 1.4 million National Ocean Service (NOS) bathymetric soundings from 98 lead-line and single-beam echosounder hydrographic surveys conducted from 1910 to...

  1. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods usually require a great deal of computation and yet sometimes still fail to meet the real-time demands of robot vision applications. To solve the problem, a quick method for detecting interest points is presented. To decrease the computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built with several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation, just like the SIFT descriptor. The feature vector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as its components. Compared with the SIFT descriptor, this descriptor's dimension is evidently reduced, which simplifies the point matching process. The performance of the method is analyzed theoretically in this paper and the experimental results confirm its validity.
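
    The descriptor above averages the normalized colors g and b over subregions around an interest point. The sketch below illustrates that idea for a fixed square patch; the 4 x 4 subregion grid and the patch size are assumptions, and scale- and orientation-dependent sampling is omitted for brevity.

    import numpy as np

    def color_descriptor(patch, grid=4):
        """Mean normalized colour g and b per subregion of an RGB patch.

        `patch` is an (H, W, 3) float array around an interest point; a
        `grid` x `grid` layout of subregions is an illustrative choice
        (the subregion width would scale with the detected scale factor).
        """
        rgb = patch.astype(np.float64)
        s = rgb.sum(axis=2) + 1e-9           # avoid division by zero
        g = rgb[..., 1] / s                  # normalized green chromaticity
        b = rgb[..., 2] / s                  # normalized blue chromaticity
        h, w = g.shape
        feats = []
        for i in range(grid):
            for j in range(grid):
                ys = slice(i * h // grid, (i + 1) * h // grid)
                xs = slice(j * w // grid, (j + 1) * w // grid)
                feats.extend([g[ys, xs].mean(), b[ys, xs].mean()])
        return np.asarray(feats)             # 2 values per subregion

    patch = np.random.rand(32, 32, 3)
    print(color_descriptor(patch).shape)     # (32,) for a 4x4 grid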

  2. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applied to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method obtains more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfying segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more sophisticated statistical analyses of Hα solar flares.
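
    A minimal sketch of the segmentation step, assuming the adaptive gray threshold is tied to the frame mean and standard deviation and that small connected regions are rejected by an area threshold; the constants and the synthetic frame are illustrative only.

    import numpy as np
    from scipy import ndimage

    def segment_flares(image, k=3.0, min_area=50):
        """Candidate flare regions from an H-alpha frame.

        The gray threshold adapts to the frame statistics (mean + k*std) and
        small regions are rejected by an area threshold; both `k` and
        `min_area` are illustrative values, not those of the paper.
        """
        thr = image.mean() + k * image.std()          # adaptive gray threshold
        mask = image > thr
        labels, n = ndimage.label(mask)               # connected components
        areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.where(areas >= min_area)[0] + 1     # area threshold
        return np.isin(labels, keep), labels, keep

    frame = np.random.rand(512, 512) * 0.1            # synthetic quiet background
    frame[200:230, 300:340] += 1.0                    # synthetic bright "flare" patch
    flare_mask, labels, ids = segment_flares(frame)
    print(len(ids), "candidate region(s)")            # expected: 1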

  3. Aircraft Detection from VHR Images Based on Circle-Frequency Filter and Multilevel Features

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2013-01-01

    Full Text Available Aircraft automatic detection from very high-resolution (VHR) images plays an important role in a wide variety of applications. This paper proposes a novel detector for aircraft detection from VHR remote sensing images. To accurately distinguish aircraft from background, a circle-frequency filter (CF-filter) is used to extract the candidate locations of aircraft from a large-size image. A multi-level feature model is then employed to represent both the local appearance and the spatial layout of aircraft by means of the Robust Hue Descriptor and Histogram of Oriented Gradients. The experimental results demonstrate the superior performance of the proposed method.

  4. Using cell nuclei features to detect colon cancer tissue in hematoxylin and eosin stained slides.

    Science.gov (United States)

    Jørgensen, Alex Skovsbo; Rasmussen, Anders Munk; Andersen, Niels Kristian Mäkinen; Andersen, Simon Kragh; Emborg, Jonas; Røge, Rasmus; Østergaard, Lasse Riis

    2017-08-01

    Currently, diagnosis of colon cancer is based on manual examination of histopathological images by a pathologist. This can be time consuming, and interpretation of the images is subject to inter- and intra-observer variability. This may be improved by introducing a computer-aided diagnosis (CAD) system for automatic detection of cancer tissue within whole slide hematoxylin and eosin (H&E) stains. Cancer disrupts the normal control mechanisms of cell proliferation and differentiation, affecting the structure and appearance of the cells. Therefore, extracting features from segmented cell nuclei structures may provide useful information to detect cancer tissue. A framework was proposed for automatic classification of regions of interest (ROIs) containing either benign or cancerous colon tissue extracted from whole slide H&E stained images using cell nuclei features. A total of 1,596 ROIs were extracted from 87 whole slide H&E stains (44 benign and 43 cancer). A cell nuclei segmentation algorithm consisting of color deconvolution, k-means clustering, local adaptive thresholding, and cell separation was performed within the ROIs to extract cell nuclei features. From the segmented cell nuclei structures, a total of 750 texture and intensity-based features were extracted for classification of the ROIs. The nine most discriminative cell nuclei features were used in a random forest classifier to determine whether the ROIs contained benign or cancer tissue. The ROI classification obtained an area under the curve (AUC) of 0.96, sensitivity of 0.88, specificity of 0.92, and accuracy of 0.91 using an optimized threshold. The developed framework showed promising results in using cell nuclei features to classify ROIs as containing benign or cancer tissue in H&E stained tissue samples. © 2017 International Society for Advancement of Cytometry.
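
    The segmentation pipeline named in the abstract (color deconvolution, clustering/thresholding, cell separation) could be sketched roughly as below with scikit-image and SciPy; the haematoxylin-channel thresholding and connected-component labelling are simplified stand-ins for the paper's k-means and cell-separation steps, and all parameter values are assumptions.

    import numpy as np
    from skimage.color import rgb2hed
    from skimage.filters import threshold_local
    from scipy import ndimage

    def segment_nuclei(rgb_roi, block_size=51, offset=-0.01):
        """Rough nucleus mask from an H&E ROI.

        Colour deconvolution isolates the haematoxylin channel, a local
        adaptive threshold separates nuclei from background, and connected
        components stand in for the cell-separation step; all parameter
        values here are illustrative, not the ones used in the paper.
        """
        hed = rgb2hed(rgb_roi)                  # colour deconvolution
        hema = hed[..., 0]                      # haematoxylin channel (nuclei)
        local_thr = threshold_local(hema, block_size=block_size, offset=offset)
        mask = hema > local_thr
        labels, n_nuclei = ndimage.label(mask)  # crude cell separation
        return labels, n_nuclei

    roi = np.random.rand(256, 256, 3)           # stand-in for an H&E ROI
    labels, n = segment_nuclei(roi)
    print(n, "candidate nuclei")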

  5. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    Science.gov (United States)

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested in feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (a leave-one-out-based test); 2) a computationally intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.

  6. Airborne electromagnetic detection of shallow seafloor topographic features, including resolution of multiple sub-parallel seafloor ridges

    Science.gov (United States)

    Vrbancich, Julian; Boyd, Graham

    2014-05-01

    The HoistEM helicopter time-domain electromagnetic (TEM) system was flown over waters in Backstairs Passage, South Australia, in 2003 to test the bathymetric accuracy, and hence the ability to resolve seafloor structure, in shallow and deeper waters (extending to ~40 m depth) that contain interesting seafloor topography. The topography that forms a rock peak (South Page) in the form of a mini-seamount that barely rises above the water surface was accurately delineated along its ridge, from the start of its base (where the seafloor is relatively flat) in ~30 m water depth to its peak at the water surface, after an empirical correction was applied to the data to account for imperfect system calibration, consistent with earlier studies using the same HoistEM system. A much smaller submerged feature (Threshold Bank) of ~9 m peak height located in waters of 35 to 40 m depth was also accurately delineated. These observations, when checked against known water depths in these two regions, showed that the airborne TEM system, following empirical data correction, was operating correctly. The third and most important component of the survey was flown over the Yatala Shoals region, which includes a series of sub-parallel seafloor ridges (resembling large sandwaves rising up to ~20 m from the seafloor) that branch out and gradually decrease in height as the ridges spread out across the seafloor. These sub-parallel ridges provide an interesting topography because the interpreted water depths obtained from 1D inversion of the TEM data highlight the limitations of the EM footprint size in resolving both the separation between the ridges (which varies up to ~300 m) and the height of individual ridges (which varies up to ~20 m), and possibly also the limitations of assuming a 1D model in areas where the topography is quasi-2D/3D.

  7. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Full Text Available Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before an MSE, and also before counterexamples of fatigued but attentive driving, were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 – 4.9 % and 1.9 – 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 – 0.006 % and 0.002 – 0.06 %, respectively. This shows that PSD features of the vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.

  8. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Full Text Available Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before an MSE, and also before counterexamples of fatigued but attentive driving, were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 - 4.9 % and 1.9 - 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 - 0.006 % and 0.002 - 0.06 %, respectively. This shows that PSD features of the vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
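
    A minimal sketch of the two feature types used in the records above (band-averaged log-PSD between 0 and 8 Hz in 0.5 Hz bands, plus the maximum cross correlation between the two EOG channels); the sampling rate, Welch settings and normalisation are assumptions, not the study's actual preprocessing.

    import numpy as np
    from scipy.signal import welch, correlate

    def eog_features(horiz, vert, fs=128.0):
        """Band-averaged log-PSD (0-8 Hz in 0.5 Hz bands) and MaxCC.

        `horiz` and `vert` are the two EOG channels of a 10 s segment;
        the sampling rate and Welch settings are illustrative assumptions.
        """
        feats = []
        for x in (horiz, vert):
            f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))   # 0.5 Hz resolution
            for lo in np.arange(0.0, 8.0, 0.5):             # 16 bands per channel
                band = (f >= lo) & (f < lo + 0.5)
                feats.append(np.log(pxx[band].mean() + 1e-12))
        # maximum cross-correlation between the normalised channels
        h = (horiz - horiz.mean()) / (horiz.std() + 1e-12)
        v = (vert - vert.mean()) / (vert.std() + 1e-12)
        feats.append(correlate(h, v, mode="full").max() / len(h))
        return np.asarray(feats)

    t = np.arange(0, 10, 1 / 128.0)
    print(eog_features(np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 1.5 * t)).shape)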

  9. A HYBRID FILTER AND WRAPPER FEATURE SELECTION APPROACH FOR DETECTING CONTAMINATION IN DRINKING WATER MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    S. VISALAKSHI

    2017-07-01

    Full Text Available Feature selection is an important task in predictive models which helps to identify irrelevant features in high-dimensional datasets. In the case of the water contamination detection dataset, the standard wrapper algorithm alone cannot be applied because of the complexity. To overcome this computational complexity problem and make the process lighter, a filter-wrapper based algorithm has been proposed. In this work, reducing the feature space is a significant component of water contamination detection. The main findings are as follows: (1) The main goal is speeding up the feature selection process, so the proposed filter-based feature pre-selection is applied and guarantees that useful data are unlikely to be discarded in the initial stage, which is discussed briefly in this paper. (2) The resulting features are again filtered using a Genetic Algorithm coupled with a Support Vector Machine, which helps narrow down the subset of features with high accuracy and decreases the expense. Experimental results show that the proposed methods trim down redundant features effectively and achieve better classification accuracy.

  10. An image-processing method to detect sub-optical features based on understanding noise in intensity measurements.

    Science.gov (United States)

    Bhatia, Tripta

    2018-02-01

    Accurate quantitative analysis of image data requires that we distinguish between fluorescence intensity (true signal) and the noise inherent to its measurement to the extent possible. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine, dissolved in water and water-glycerol mixtures, using a fluorescence confocal polarizing microscope. We quantify image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique, "optimum smoothening", to improve the signal-to-noise ratio (SNR) of features of interest without smearing their structural details. A high SNR yields the positional accuracy needed to resolve features of interest whose width is below the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected are of width [Formula: see text] and [Formula: see text] nm, respectively, as discussed in this paper. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis of features down to sub-optical length scales obtained by any kind of fluorescence intensity imaging in raster mode.

  11. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    Science.gov (United States)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of the selection of an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to put feature information of a transient flaw echo signal in local field, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
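
    A rough sketch of the short-time SVD indicator: each overlapping segment is folded into a small Hankel-style matrix and its largest singular value becomes the indicator at that position. The window length, step, matrix shape and the synthetic echo are illustrative assumptions rather than the paper's parameters.

    import numpy as np

    def msi(signal, win=64, step=8, rows=8):
        """Maximum singular value indicator from short-time SVD.

        Each overlapping window of the 1-D signal is folded into a small
        Hankel-style matrix and its largest singular value is taken as the
        indicator at that position; window, step and matrix sizes are
        illustrative choices, not the ones used in the paper.
        """
        out = []
        for start in range(0, len(signal) - win + 1, step):
            seg = signal[start:start + win]
            cols = win - rows + 1
            # Hankel matrix built from the segment
            mat = np.array([seg[i:i + cols] for i in range(rows)])
            out.append(np.linalg.svd(mat, compute_uv=False)[0])
        return np.asarray(out)

    rng = np.random.default_rng(0)
    noise = rng.normal(0, 1, 2000)
    noise[1000:1030] += 5 * np.sin(2 * np.pi * 0.2 * np.arange(30))  # buried echo
    indicator = msi(noise)
    print("peak window index:", indicator.argmax())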

  12. Breast Cancer Detection with Reduced Feature Set

    Directory of Open Access Journals (Sweden)

    Ahmet Mert

    2015-01-01

    Full Text Available This paper explores the feature reduction properties of independent component analysis (ICA) on a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested with different validation (5/10-fold cross-validation) and partitioning (20%–40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign and malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system, while reducing computational complexity.
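
    The reduction described above can be reproduced in outline with scikit-learn, whose bundled breast cancer dataset is the WDBC data: FastICA extracts a single independent component, which is then scored against the full 30-feature set. The choice of k-NN and the cross-validation settings are illustrative, not the paper's exact protocol.

    from sklearn.datasets import load_breast_cancer
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # WDBC data (30 features) reduced to a single independent component,
    # then scored with k-NN; the classifier and CV settings are illustrative.
    X, y = load_breast_cancer(return_X_y=True)

    full = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    reduced = make_pipeline(StandardScaler(),
                            FastICA(n_components=1, random_state=0),
                            KNeighborsClassifier(n_neighbors=5))

    print("30 features :", cross_val_score(full, X, y, cv=10).mean())
    print("1 IC        :", cross_val_score(reduced, X, y, cv=10).mean())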

  13. Tumor detection using feature extraction

    International Nuclear Information System (INIS)

    Sankar, A.S.; Amudhavalli, N.; Sivakolundu, M.K.

    2008-01-01

    An assistance system for brain tumor detection helps the doctor to analyse brain tumors in MRI images and supports decision making. Manual analysis takes 3-5 hours per tumor. With an assistance system, doctors are in a position to analyse the tumor faster and make a correct decision.

  14. Detection of microsleep events in a car driving simulation study using electrocardiographic features

    Directory of Open Access Journals (Sweden)

    Lenis Gustavo

    2016-09-01

    Full Text Available Microsleep events (MSE) are short intrusions of sleep under the demand of sustained attention. They can impose a major threat to safety while driving a car and are considered one of the most significant causes of traffic accidents. Driver fatigue and MSE account for up to 20% of all car crashes in Europe and at least 100,000 accidents in the US every year. Unfortunately, there is no standardized test to quantify the degree of vigilance of a driver. To account for this problem, different approaches based on biosignal analysis have been studied in the past. In this paper, we investigate electrocardiography-based detection of MSE using morphological and rhythmical features. 14 records from a car driving simulation study with a high incidence of MSE were analyzed, and the behavior of the ECG features before and after an MSE, in relation to reference baseline values (without drowsiness), was investigated. The results show that MSE cannot be detected (or predicted) using only the ECG. However, in the presence of MSE, the rhythmical and morphological features were observed to be significantly different from those calculated for the reference signal without sleepiness. In particular, when MSE were present, the heart rate diminished while the heart rate variability increased. The time distances between the P wave and the R peak, and between the R peak and the T wave, as well as their dispersion, also increased. This demonstrates a noticeable change in the autonomous regulation of the heart. In the future, these ECG parameters could be used as a surrogate measure of fatigue.

  15. Spinal focal lesion detection in multiple myeloma using multimodal image features

    Science.gov (United States)

    Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf

    2015-03-01

    Multiple myeloma is a tumor disease of the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur at different sites in the skeleton. To quantify overall tumor mass, for determining the degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large number of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume has not been possible until now. Therefore, development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1-weighted MR images and hyperintensity in T2-weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper, a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1- and T2-weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity-derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around a voxel, from both T1- and T2-weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients can be classified at an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
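
    A minimal sketch of the voxel-wise feature extraction and classification, using 2-D slices and random stand-in data for brevity; the 5 x 5 neighbourhood follows the abstract, while the filter implementations, forest size and labels are assumptions.

    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import RandomForestClassifier

    def voxel_features(t1_slice, t2_slice, size=5):
        """Per-voxel features from T1 and T2 slices: intensity plus local
        max, min, mean and median in a size x size neighbourhood.

        A 2-D slice is used here for brevity; the neighbourhood size follows
        the 5 x 5 window mentioned in the abstract.
        """
        feats = []
        for img in (t1_slice, t2_slice):
            feats.append(img)
            feats.append(ndimage.maximum_filter(img, size=size))
            feats.append(ndimage.minimum_filter(img, size=size))
            feats.append(ndimage.uniform_filter(img, size=size))   # local mean
            feats.append(ndimage.median_filter(img, size=size))
        return np.stack(feats, axis=-1).reshape(-1, len(feats))

    # Stand-in data: random "images" and random lesion labels
    rng = np.random.default_rng(0)
    t1, t2 = rng.random((64, 64)), rng.random((64, 64))
    X = voxel_features(t1, t2)
    y = rng.integers(0, 2, X.shape[0])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.score(X, y))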

  16. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Full Text Available Duplicate and near-duplicate web pages are a chief concern for web search engines. In reality, they incur enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are under consideration for near-duplicate detection, we first use K-mode clustering and subsequently apply sentence feature and fingerprint comparison. Using these steps, we identify the near-duplicate web pages exactly and in an efficient manner. The experimentation was carried out on web page collections, and the results confirmed the efficiency of the proposed approach in detecting near-duplicate web pages.
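
    A toy sketch of the sentence-level fingerprinting and comparison idea: each normalised sentence is hashed to a fingerprint and two pages are compared by the overlap of their fingerprint sets. The MD5-based hash and the Jaccard threshold are illustrative choices; the paper's own fingerprinting scheme is not specified in the abstract.

    import hashlib
    import re

    def sentence_fingerprints(text):
        """Set of 64-bit fingerprints, one per normalised sentence."""
        sentences = re.split(r"[.!?]+", text.lower())
        fps = set()
        for s in sentences:
            words = re.findall(r"[a-z0-9]+", s)
            if not words:
                continue
            digest = hashlib.md5(" ".join(words).encode()).digest()
            fps.add(int.from_bytes(digest[:8], "big"))
        return fps

    def near_duplicate(a, b, threshold=0.8):
        """Jaccard overlap of sentence fingerprints against a threshold."""
        fa, fb = sentence_fingerprints(a), sentence_fingerprints(b)
        jaccard = len(fa & fb) / max(len(fa | fb), 1)
        return jaccard >= threshold, jaccard

    page1 = "Offer ends soon. Buy one get one free. Contact us today."
    page2 = "Buy one get one free! Offer ends soon. Visit our store."
    print(near_duplicate(page1, page2))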

  17. Ship detection in South African oceans using SAR, CFAR and a Haar-like feature classifier

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2014-07-01

    Full Text Available 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec, Canada, 13-18 July 2014. Ship Detection in South African Oceans Using SAR, CFAR and a Haar-like Feature Classifier. C. P. Schwegmann, W. Kleynhans, B. P. Salmon...

  18. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.

  19. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.

  20. Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007

    Science.gov (United States)

    Bennion, David

    2009-01-01

    To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with various survey methods and interpolation processes limits the statistical certainty of the results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.

  1. Cascade detection for the extraction of localized sequence features; specificity results for HIV-1 protease and structure-function results for the Schellman loop.

    Science.gov (United States)

    Newell, Nicholas E

    2011-12-15

    The extraction of the set of features most relevant to function from classified biological sequence sets is still a challenging problem. A central issue is the determination of expected counts for higher order features so that artifact features may be screened. Cascade detection (CD), a new algorithm for the extraction of localized features from sequence sets, is introduced. CD is a natural extension of the proportional modeling techniques used in contingency table analysis into the domain of feature detection. The algorithm is successfully tested on synthetic data and then applied to feature detection problems from two different domains to demonstrate its broad utility. An analysis of HIV-1 protease specificity reveals patterns of strong first-order features that group hydrophobic residues by side chain geometry and exhibit substantial symmetry about the cleavage site. Higher order results suggest that favorable cooperativity is weak by comparison and broadly distributed, but indicate possible synergies between negative charge and hydrophobicity in the substrate. Structure-function results for the Schellman loop, a helix-capping motif in proteins, contain strong first-order features and also show statistically significant cooperativities that provide new insights into the design of the motif. These include a new 'hydrophobic staple' and multiple amphipathic and electrostatic pair features. CD should prove useful not only for sequence analysis, but also for the detection of multifactor synergies in cross-classified data from clinical studies or other sources. Windows XP/7 application and data files available at: https://sites.google.com/site/cascadedetect/home. nacnewell@comcast.net Supplementary information is available at Bioinformatics online.

  2. A new approach for detecting local features

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2010-01-01

    Local features have, up to now, often been understood to mean interest points. A patch around each point is formed to compute descriptors or feature vectors. Therefore, in order to satisfy different invariant imaging conditions such as scales and viewpoints, an input image is often represented i...

  3. More than a century of bathymetric observations and present-day shallow sediment characterization in Belfast Bay, Maine, USA: implications for pockmark field longevity

    Science.gov (United States)

    Brothers, Laura L.; Kelley, Joseph T.; Belknap, Daniel F.; Barnhardt, Walter A.; Andrews, Brian D.; Maynard, Melissa Landon

    2011-08-01

    Mechanisms and timescales responsible for pockmark formation and maintenance remain uncertain, especially in areas lacking extensive thermogenic fluid deposits (e.g., previously glaciated estuaries). This study characterizes seafloor activity in the Belfast Bay, Maine nearshore pockmark field using (1) three swath bathymetry datasets collected between 1999 and 2008, complemented by analyses of shallow box-core samples for radionuclide activity and undrained shear strength, and (2) historical bathymetric data (report and smooth sheets from 1872, 1947, 1948). In addition, because repeat swath bathymetry surveys are an emerging data source, we present a selected literature review of recent studies using such datasets for seafloor change analysis. This study is the first to apply the method to a pockmark field, and characterizes macro-scale (>5 m) evolution of tens of square kilometers of highly irregular seafloor. Presence/absence analysis yielded no change in pockmark frequency or distribution over a 9-year period (1999-2008). In that time pockmarks did not detectably enlarge, truncate, elongate, or combine. Historical data indicate that pockmark chains already existed in the 19th century. Despite the lack of macroscopic changes in the field, near-bed undrained shear-strength values of less than 7 kPa and scattered downcore 137Cs signatures indicate a highly disturbed setting. Integrating these findings with independent geophysical and geochemical observations made in the pockmark field, it can be concluded that (1) large-scale sediment resuspension and dispersion related to pockmark formation and failure do not occur frequently within this field, and (2) pockmarks can persevere in a dynamic estuarine setting that exhibits minimal modern fluid venting. Although pockmarks are conventionally thought to be long-lived features maintained by a combination of fluid venting and minimal sediment accumulation, this suggests that other mechanisms may be equally active in

  4. Topographic attributes as a guide for automated detection or highlighting of geological features

    Science.gov (United States)

    Viseur, Sophie; Le Men, Thibaud; Guglielmi, Yves

    2015-04-01

    Photogrammetry or LIDAR technology combined with photography allows geoscientists to obtain 3D high-resolution numerical representations of outcrops, generally termed Digital Outcrop Models (DOM). For over a decade, these 3D numerical outcrops have served as support for precise and accurate interpretations of geological features such as fracture traces or planes, strata, facies mapping, etc. These interpretations have the benefit of being directly georeferenced and embedded in 3D space. They are then easily integrated into GIS or geomodeler software for modelling the subsurface geological structures in 3D. However, numerical outcrops generally represent huge data sets that are cumbersome to manipulate and hence to interpret. This may be particularly tedious when several scales of geological features must be investigated or when the geological features are very dense and imbricated. Automated tools for interpreting geological features from DOMs would then be a significant help in processing these kinds of data. Such technologies are commonly used for interpreting seismic or medical data. However, it may be noticed that even if many efforts have been devoted to easily and accurately acquiring 3D topographic point clouds and photos and to visualizing accurate 3D textured DOMs, little attention has been paid to the development of algorithms for automated detection of geological structures from DOMs. The automatic detection of objects in numerical data generally assumes that signals or attributes computed from the data allow the recognition of the targeted object boundaries. The first step then consists in defining attributes that highlight the objects or their boundaries. For DOM interpretations, some authors have proposed using differential operators computed on the surface, such as normals or curvatures. These methods generally extract polylines corresponding to fracture traces or bed limits. Other approaches rely on the PCA technology to segregate different topographic plans

  5. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, in X-ray images multiple defects are also misinterpreted as single defects. Therefore, to address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
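
    A minimal sketch of the sub-band feature extraction, assuming a discrete wavelet decomposition of a single A-scan and log-energy per sub-band as the classifier input; the wavelet family, decomposition depth, sampling rate and synthetic echo are illustrative assumptions.

    import numpy as np
    import pywt

    def subband_features(ultrasonic_signal, wavelet="db4", level=5):
        """Log-energy of each wavelet sub-band of an A-scan.

        The wavelet family and decomposition depth are illustrative choices;
        the resulting vector would feed the ANN / clustering classifiers.
        """
        coeffs = pywt.wavedec(ultrasonic_signal, wavelet, level=level)
        return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

    fs = 10e6                                   # 10 MHz sampling, assumed
    t = np.arange(0, 200e-6, 1 / fs)
    echo = np.exp(-((t - 80e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 2e6 * t)
    signal = echo + 0.1 * np.random.randn(t.size)
    print(subband_features(signal))             # one value per sub-band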

  6. High resolution bathymetric and sonar images of a ridge southeast of Terceira Island (Azores plateau)

    Science.gov (United States)

    Lourenço, N.; Miranda, J. M.; Luis, J.; Silva, I.; Goslin, J.; Ligi, M.

    2003-04-01

    The Terceira rift is an oblique, ultra-slow spreading system where a transtensive regime results from differential movement between the Eurasian and African plates. So far, no classical ridge segmentation pattern has been observed here. The predominant morphological features are fault-controlled rhombic basins and volcanism-related morphologies such as circular seamounts and volcanic ridges. We present SIMRAD EM300 (bathymetry + backscatter) images acquired over one of these ridges, located SE of Terceira Island, during the SIRENA cruise (PI J. Goslin), which complement previous TOBI mosaics collected over the same area during the AZZORRE99 cruise (PI M. Ligi). The ridge has a NW-SE orientation, is seismically active (a seismic crisis was documented in 1997), and corresponds to the southern branch of a V-shaped bathymetric feature enclosing Terceira Island, whose tip is located west of the island near the 1998 Serreta ridge eruption site. NE of the ridge, the core of the V corresponds to the North Hirondelle basin. Most of this area corresponds to the Brunhes magnetic epoch. The new bathymetry maps reveal a partition between tectonic processes, centred on the ridge, and volcanism present at the bottom of the North Hirondelle basin. The high-backscatter surface of the ridge is cut by a set of sub-parallel, anastomosed normal faults striking between N130º and N150º. Some faults show horse-tail terminations. Fault splays sometimes link to neighbouring faults, defining extensional duplexes and fault-wedge basins and highs of rhombic shape. The faulting geometry suggests that a left-lateral strike-slip component is present. The top of the ridge consists of an arched demi-horst, probably a remnant volcanic structure (caldera system?) that existed prior to the onset of the tectonic stage on the ridge. Both ridge flanks display gullies and mass-wasting fans at the base of the slope. The ridge vicinities are almost exclusively composed of a grayish homogeneous

  7. Comparison of spatial frequency domain features for the detection of side attack explosive ballistics in synthetic aperture acoustics

    Science.gov (United States)

    Dowdy, Josh; Anderson, Derek T.; Luke, Robert H.; Ball, John E.; Keller, James M.; Havens, Timothy C.

    2016-05-01

    Explosive hazards in current and former conflict zones are a threat to both military and civilian personnel. As a result, much effort has been dedicated to identifying automated algorithms and systems to detect these threats. However, robust detection is complicated due to factors like the varied composition and anatomy of such hazards. In order to solve this challenge, a number of platforms (vehicle-based, handheld, etc.) and sensors (infrared, ground penetrating radar, acoustics, etc.) are being explored. In this article, we investigate the detection of side attack explosive ballistics via a vehicle-mounted acoustic sensor. In particular, we explore three acoustic features, one in the time domain and two on synthetic aperture acoustic (SAA) beamformed imagery. The idea is to exploit the varying acoustic frequency profile of a target due to its unique geometry and material composition with respect to different viewing angles. The first two features build their angle specific frequency information using a highly constrained subset of the signal data and the last feature builds its frequency profile using all available signal data for a given region of interest (centered on the candidate target location). Performance is assessed in the context of receiver operating characteristic (ROC) curves on cross-validation experiments for data collected at a U.S. Army test site on different days with multiple target types and clutter. Our preliminary results are encouraging and indicate that the top performing feature is the unrolled two dimensional discrete Fourier transform (DFT) of SAA beamformed imagery.
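
    A hedged sketch of the "unrolled two dimensional DFT" feature idea described above: take the magnitude of the 2-D discrete Fourier transform of an image chip centred on a candidate location and flatten it into a feature vector. The chip size and normalisation are assumptions.

```python
# Unrolled 2-D DFT magnitude of an image chip as a feature vector.
import numpy as np

def unrolled_dft_feature(chip):
    """Magnitude of the 2-D DFT, DC-centred, flattened to a 1-D feature vector."""
    spectrum = np.fft.fftshift(np.fft.fft2(chip))
    mag = np.abs(spectrum)
    return (mag / (mag.max() + 1e-12)).ravel()

chip = np.random.default_rng(2).normal(size=(32, 32))   # stand-in for SAA beamformed imagery
feature = unrolled_dft_feature(chip)
print(feature.shape)   # (1024,)
```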

  8. Remote measurement of river discharge using thermal particle image velocimetry (PIV) and various sources of bathymetric information

    Science.gov (United States)

    Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.

    2017-01-01

    Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
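
    A minimal sketch of the velocity-index discharge calculation implied above: PIV surface velocities are scaled to depth-averaged velocities by a calibrated index and integrated across the cross-section together with the depths. All numbers below are invented for illustration.

```python
# Velocity-index discharge estimate from surface velocities and depths.
import numpy as np

surface_velocity = np.array([0.8, 1.1, 1.3, 1.2, 0.9])   # m/s from thermal PIV
depth = np.array([0.4, 0.7, 0.9, 0.8, 0.5])              # m, from bathymetry or spectra
station_width = 2.0                                       # m between stations (assumed)
velocity_index = 0.85                                     # site-specific calibrated index (assumed)

depth_avg_velocity = velocity_index * surface_velocity
discharge = np.sum(depth_avg_velocity * depth * station_width)   # m^3/s
print(f"estimated discharge: {discharge:.2f} m^3/s")
```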

  9. An Ensemble Method with Integration of Feature Selection and Classifier Selection to Detect the Landslides

    Science.gov (United States)

    Zhongqin, G.; Chen, Y.

    2017-12-01

    Quickly and automatically identifying the spatial distribution of landslides is essential for the prevention, mitigation, and assessment of landslide hazards. It remains a challenging task owing to the complicated characteristics and vague boundaries of landslide areas in imagery. High-resolution remote sensing images have multiple scales, complex spatial distributions, and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has occurred. In this research we present a new semi-supervised workflow, taking advantage of recent object-oriented image analysis and machine learning algorithms, to quickly locate landslides of different origins in areas of southwest China. In addition to a sequence of image segmentation, feature selection, object classification, and error testing, this workflow ensembles the feature selection and classifier selection steps. The features utilized in this study were normalized difference vegetation index (NDVI) change, textural features derived from gray level co-occurrence matrices (GLCM), spectral features, and others. The improvements in this study show that the algorithm removes redundant features and makes full use of the classifiers, leading to higher accuracy in delineating the shape of landslides on high-resolution remote sensing images, with the flexibility to handle different kinds of landslides.
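
    A toy sketch of one of the features mentioned above, the NDVI change between pre- and post-event images; the reflectance values and the decision threshold are invented, and this is not the study's full object-oriented workflow.

```python
# NDVI-change feature: NDVI drops sharply where vegetation has been stripped.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-12)

rng = np.random.default_rng(3)
red_pre, nir_pre = rng.uniform(0.05, 0.2, (100, 100)), rng.uniform(0.4, 0.6, (100, 100))
red_post, nir_post = red_pre.copy(), nir_pre.copy()
red_post[40:60, 40:60], nir_post[40:60, 40:60] = 0.3, 0.25   # simulated landslide scar

ndvi_change = ndvi(red_post, nir_post) - ndvi(red_pre, nir_pre)
landslide_candidates = ndvi_change < -0.3   # large NDVI loss (threshold assumed)
print("candidate pixels:", landslide_candidates.sum())
```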

  10. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  11. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  12. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
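
    A heavily simplified sketch of the hybrid-feature idea shared by the three records above: concatenate a deep embedding with multi-level LBP histograms and classify with an SVM. The "deep" vectors below are random placeholders standing in for CNN embeddings, and the radii, bin counts and SVM settings are assumptions, not the authors' configuration (scikit-image and scikit-learn assumed).

```python
# Hybrid features: multi-level LBP histograms concatenated with placeholder
# "deep" embeddings, classified by an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def multilevel_lbp_histogram(gray, radii=(1, 2, 3)):
    feats = []
    for r in radii:
        p = 8 * r
        lbp = local_binary_pattern(gray, P=p, R=r, method="uniform")
        hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(4)
faces = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)   # stand-in face crops
labels = np.r_[np.zeros(20), np.ones(20)]                          # 0 = real, 1 = attack

handcrafted = np.array([multilevel_lbp_histogram(f) for f in faces])
deep = rng.normal(size=(40, 128))          # placeholder for CNN embeddings
hybrid = np.hstack([deep, handcrafted])    # the "hybrid" feature vector

clf = SVC(kernel="rbf").fit(hybrid, labels)
print("training accuracy:", clf.score(hybrid, labels))
```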

  13. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Entin Martiana Kusumaningtyas

    2018-01-01

    Full Text Available According to the WHO, heart disease is the leading cause of death, and examining it with the current methods used in hospitals is not cheap. Iridology is one of the most popular alternative ways to assess the condition of organs. It is the study of signs in the iris that may indicate abnormalities in the body, including basic genetics, toxin deposition, circulatory congestion, and other weaknesses, and it can be practised by health practitioners and non-experts alike. Research on computerized iridology has been done before; one example is a computer iridology system to detect heart conditions. The system has several stages: eye image capture based on a target, pre-processing, cropping, segmentation, feature extraction, and classification using thresholding algorithms. In this study, the feature extraction process is performed using a binarization method that transforms the image into black and white. We compare two binarization approaches: binarization based on grayscale values and binarization based on proximity. The proposed system was tested at the Mugi Barokah Clinic, Surabaya. We conclude that the grayscale-based approach yields better classification performance than the proximity-based approach.

  14. A new feature detection mechanism and its application in secured ECG transmission with noise masking.

    Science.gov (United States)

    Sufi, Fahim; Khalil, Ibrahim

    2009-04-01

    With cardiovascular disease as the number one killer of the modern era, the electrocardiogram (ECG) is collected, stored, and transmitted more frequently than ever before. In reality, however, ECGs are rarely transmitted and stored in a secured manner. Recent research shows that an eavesdropper can reveal the identity and cardiovascular condition of a patient from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and stored as such in medical repositories. To achieve this, this paper first presents a new ECG feature detection mechanism, which was compared against existing cross correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Whereas the CC based approaches had 40% and 53% misclassification rates, the proposed detection algorithm produced no misclassifications. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects, using added noises corresponding to each of the ECG features. The obfuscated ECG can be freely distributed over the internet without the need for encryption, since the original features required to identify personal information about the patient remain concealed. Only authorized personnel possessing a secret key are able to reconstruct the original ECG from the obfuscated ECG. The distributed signal appears as a regular ECG without encryption, so traditional decryption techniques, including powerful brute force attacks, are useless against this obfuscation.

  15. Network Traffic Features for Anomaly Detection in Specific Industrial Control System Network

    Directory of Open Access Journals (Sweden)

    Matti Mantere

    2013-09-01

    Full Text Available The deterministic and restricted nature of industrial control system networks sets them apart from more open networks, such as local area networks in office environments. This improves the usability of network security monitoring approaches that would be less feasible in more open environments. One such approach is machine learning based anomaly detection. Without proper customization for the special requirements of the industrial control system network environment, many existing anomaly or misuse detection systems will perform sub-optimally. A machine learning based approach could reduce the amount of manual customization required for different industrial control system networks. In this paper we analyze a possible set of features to be used in a machine learning based anomaly detection system in the real-world industrial control system network environment under investigation. The network under investigation is represented by an architectural drawing and by results derived from network trace analysis. The network trace is captured from a live, running industrial process control network and includes both control data and the data flowing between the control network and the office network. We limit the investigation to the IP traffic in the traces.

  16. LAND COVER CHANGE DETECTION BASED ON GENETIC FEATURE SELECTION AND IMAGE ALGEBRA USING HYPERION HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    S. T. Seydi

    2015-12-01

    Full Text Available The Earth has always been under the influence of population growth and human activities, a process that causes changes in land use. For optimal management of resource use, it is therefore necessary to be aware of these changes. Satellite remote sensing has several advantages for monitoring land use/cover resources, especially over large geographic areas. Change detection and attribution of cultivation area over time present additional challenges for correctly analyzing remote sensing imagery. In this regard, hyperspectral images are used to better identify changes in multi-temporal imagery. Owing to their high spectral resolution, hyperspectral images have found a special place in many fields. Nevertheless, selecting suitable and adequate features/bands from these data is crucial for any analysis, and especially for change detection algorithms. This research introduces an automatic feature selection approach for detecting land use changes. In this study, the optimal bands of Hyperion hyperspectral images are selected using a genetic algorithm together with band ratios. The results reveal the superiority of the implemented method, which extracts a change map with an overall accuracy of nearly 79% from multi-temporal hyperspectral imagery.

  17. AFSC/RACE/GAP/Zimmermann: Central Gulf of Alaska Bathymetry Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — We assembled 1.75 million National Ocean Service (NOS) bathymetric soundings from 225 lead-line and single-beam echosounder hydrographic surveys conducted from 1901...

  18. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Shanshan Yang

    Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified.
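
    A sketch of the KPCA plus kernel-density MAP classification idea on synthetic two-class "vocal measure" data; the kernel, bandwidth and equal class priors are assumptions, not the study's settings.

```python
# KPCA projection to a bivariate feature space, class-conditional kernel
# density estimates, and a MAP rule with equal priors, on synthetic data.
import numpy as np
from sklearn.decomposition import KernelPCA
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, size=(60, 4))        # four synthetic "vocal measures"
parkinsonian = rng.normal(1.5, 1.2, size=(60, 4))
X = np.vstack([healthy, parkinsonian])
y = np.r_[np.zeros(60), np.ones(60)]

Z = KernelPCA(n_components=2, kernel="rbf", gamma=0.2).fit_transform(X)

kde_healthy = gaussian_kde(Z[y == 0].T)   # nonparametric class-conditional densities
kde_patient = gaussian_kde(Z[y == 1].T)
pred = (kde_patient(Z.T) > kde_healthy(Z.T)).astype(float)   # MAP rule, equal priors
print("training accuracy:", (pred == y).mean())
```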

  19. Root Exploit Detection and Features Optimization: Mobile Device and Blockchain Based Medical Data Management.

    Science.gov (United States)

    Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar

    2018-05-04

    The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access to the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploits, botnets, and Trojans, and root exploit is one of the most dangerous. It compromises the operating system kernel in order to gain root privileges, which are then used by attackers to bypass security mechanisms, gain complete control of the operating system, install other types of malware on the device, and finally steal the victims' private keys linked to the blockchain. For the purpose of maximizing the security of blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes the bio-inspired method of particle swarm optimization (PSO), which automatically selects the exclusive features, including the novel android debug bridge (ADB) features. This study also adopts boosting (AdaBoost, RealAdaBoost, LogitBoost, and MultiBoost) to enhance the machine learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system commands, (2) directory paths, and (3) code-based features. The evaluation in this study suggests a marked accuracy value of 93% with LogitBoost in the simulation. LogitBoost also predicted all the root exploit samples in our developed system, the root exploit detection system (RODS).

  20. Feature-space assessment of electrical impedance tomography coregistered with computed tomography in detecting multiple contrast targets

    International Nuclear Information System (INIS)

    Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal

    2014-01-01

    Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit, configured for injecting current through 16 electrodes, placed around the phantom. A conductivity image of the phantom was obtained from the data using electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures. Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand
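
    For reference, a minimal implementation of the four GLCM texture features named above (contrast, correlation, energy, homogeneity) for a single horizontal offset; the quantisation level and offset are illustrative choices, not those of the study.

```python
# GLCM texture features for one horizontal pixel offset on a quantised image.
import numpy as np

def glcm_features(img, levels=8):
    q = np.floor(img / (img.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of gray levels at (row, col) and (row, col + 1).
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()

    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    return {
        "contrast": (((i - j) ** 2) * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
    }

phantom = np.random.default_rng(6).uniform(size=(64, 64))   # stand-in image slice
print(glcm_features(phantom))
```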

  1. Automatic detection and classification of breast tumors in ultrasonic images using texture and morphological features.

    Science.gov (United States)

    Su, Yanni; Wang, Yuanyuan; Jiao, Jing; Guo, Yi

    2011-01-01

    Due to the severe presence of speckle noise, poor image contrast, and irregular lesion shapes, it is challenging to build a fully automatic detection and classification system for breast ultrasonic images. In this paper, a novel and effective computer-aided method, including generation of a region of interest (ROI), segmentation, and classification of breast tumors, is proposed without any manual intervention. By incorporating local texture and position features, a ROI is first detected using a self-organizing map neural network. Then a modified Normalized Cut approach considering the weighted neighborhood gray values is proposed to partition the ROI into clusters and obtain the initial boundary. In addition, a region-fitting active contour model is used to adjust the few inaccurate initial boundaries for the final segmentation. Finally, three texture and five morphologic features are extracted from each breast tumor, whereby a highly efficient Affinity Propagation clustering is used to perform the malignant/benign classification for an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant), with its performance compared to traditional methods such as level set segmentation, artificial neural network classifiers, and so forth. Experimental results show that the proposed system, which needs no training procedure or manual interference, performs best in the detection and classification of ultrasonic breast tumors, while having the lowest computational complexity.

  2. Feature-Based Retinal Image Registration Using D-Saddle Feature

    Directory of Open Access Journals (Sweden)

    Roziana Ramli

    2017-01-01

    Full Text Available Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on low-quality regions that consist of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels that are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on low-quality regions that consist of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates are observed for the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.

  3. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    Science.gov (United States)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the seriously increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive, and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulsed laser source and provides practical guidelines on the recommended parameter ranges for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted a finite element simulation study of the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best ranges of variation for the pulsed laser parameters, the simulation mesh size and time step, the working bandwidth, and the minimal size of detectable melanoma.

  4. Shape based automated detection of pulmonary nodules with surface feature based false positive reduction

    International Nuclear Information System (INIS)

    Nomura, Y.; Itoh, H.; Masutani, Y.; Ohtomo, K.; Maeda, E.; Yoshikawa, T.; Hayashi, N.

    2007-01-01

    We propose a shape-based automated detection of pulmonary nodules with surface-feature-based false positive (FP) reduction. In the proposed system, FPs occurring inside vessel bifurcations are removed using the extracted surfaces of vessels and nodules. Validation with 16 chest CT scans shows that the proposed CAD system achieves 18.7 FPs/scan at 90% sensitivity, and 7.8 FPs/scan at 80% sensitivity. (orig.)

  5. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Lars J., E-mail: Lars.grimm@duke.edu; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie [Department of Radiology, Duke University Medical Center, Box 3808, Durham, North Carolina 27710 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina School of Medicine, 2006 Old Clinic, CB No. 7510, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Duke University Medical Center, Box 2731 Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  6. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    International Nuclear Information System (INIS)

    Grimm, Lars J.; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  7. Evolutionary transitions in symbioses: dramatic reductions in bathymetric and geographic ranges of Zoanthidea coincide with loss of symbioses with invertebrates.

    Science.gov (United States)

    Swain, Timothy D

    2010-06-01

    Two fundamental symbiosis-based trophic types are recognized among Zoanthidea (Cnidaria, Anthozoa): fixed carbon is either obtained directly from zooxanthellae photosymbionts or from environmental sources through feeding with the assistance of host-invertebrate behaviour and structure. Each trophic type is characteristic of the suborders of Zoanthidea and is associated with substantial distributional asymmetries: suborder Macrocnemina are symbionts of invertebrates and have global geographic and bathymetric distributions, and suborder Brachycnemina are hosts of endosymbiotic zooxanthellae and are restricted to tropical photic zones. While exposure to solar radiation could explain the bathymetric asymmetry, it does not explain the geographic asymmetry, nor is it clear why evolutionary transitions to the zooxanthellae-free state have apparently occurred within Macrocnemina but not within Brachycnemina. To better understand the transitions between symbiosis-based trophic types of Zoanthidea, a concatenated data set of nuclear and mitochondrial nucleotide sequences was used to test hypotheses of monophyly for groups defined by morphology and symbiosis, and to reconstruct the evolutionary transitions of morphological and symbiotic characters. The results indicate that the morphological characters that define Macrocnemina are plesiomorphic and the characters that define its subordinate taxa are homoplasious. Symbioses with invertebrates have ancient and recent transitions with a general pattern of stability in host associations through evolutionary time. The reduction in distribution of Zoanthidea is independent of the evolution of zooxanthellae symbiosis and consistent with hypotheses of the benefits of invertebrate symbioses, indicating that the ability to persist in most habitats may have been lost with the termination of symbioses with invertebrates.

  8. Comparing whole slide digital images versus traditional glass slides in the detection of common microscopic features seen in dermatitis

    Directory of Open Access Journals (Sweden)

    Nikki S Vyas

    2016-01-01

    Full Text Available Background: The quality and limitations of digital slides are not fully known. We aimed to estimate intrapathologist discrepancy in detecting specific microscopic features on glass slides and digital slides created by scanning at ×20. Methods: Hematoxylin and eosin and periodic acid-Schiff glass slides were digitized using the Mirax Scan (Carl Zeiss Inc., Germany). Six pathologists assessed 50-71 digital slides. We recorded objective magnification, total time, and detection of the following: mast cells; eosinophils; plasma cells; pigmented macrophages; melanin in the epidermis; fungal bodies; neutrophils; civatte bodies; parakeratosis; and sebocytes. This process was repeated using the corresponding glass slides after 3 weeks. The diagnosis was not required. Results: The mean time to assess digital slides was 176.77 s and 137.61 s for glass slides (P < 0.001, 99% confidence interval [CI]). The mean objective magnification used to detect features using digital slides was 18.28 and 14.07 for glass slides (P < 0.001, 99.99% CI). Parakeratosis, civatte bodies, pigmented macrophages, melanin in the epidermis, mast cells, eosinophils, plasma cells, and neutrophils were identified at lower objectives on glass slides (P = 0.023-0.001, 95% CI). Average intraobserver concordance ranged from κ = 0.30 to κ = 0.78. Features with poor to fair average concordance were: melanin in the epidermis (κ = 0.15-0.58); plasma cells (κ = 0.15-0.49); and neutrophils (κ = 0.12-0.48). Features with moderate average intrapathologist concordance were: parakeratosis (κ = 0.21-0.61); civatte bodies (κ = 0.21-0.71); pigment-laden macrophages (κ = 0.34-0.66); mast cells (κ = 0.29-0.78); and eosinophils (κ = 0.31-0.79). The average intrapathologist concordance was good for sebocytes (κ = 0.51-1.00) and fungal bodies (κ = 0.47-0.76). Conclusions: Telepathology using digital slides scanned at ×20 is sufficient for detection of histopathologic features routinely encountered in

  9. Application of IRS-1D data in water erosion features detection (case study: Nour roud catchment, Iran).

    Science.gov (United States)

    Solaimani, K; Amri, M A Hadian

    2008-08-01

    The aim of this study was to assess the capability of Indian Remote Sensing (IRS) 1D data for detecting erosion features created by run-off. The ability of PAN digital data from the IRS-1D satellite to extract erosion features was evaluated in the Nour-roud catchment, located in Mazandaran province, Iran, using GIS techniques. The research method was based on supervised digital classification using the MLC algorithm and on visual interpretation using PMU analysis; the two approaches were then evaluated and compared. The results indicated that, in contrast to digital classification, which achieved an overall accuracy of 40.02% and a kappa coefficient of 31.35% owing to the low spectral resolution, visual interpretation and classification, thanks to the high spatial resolution (5.8 m), allowed the erosion features to be classified from these data; the classified features corresponded so closely with the lithology, slope, and hydrography lines in the GIS that their boundaries can be considered to overlap. Field checks also showed that these data are relatively suitable for investigating erosion features with this method and, in particular, can be applied to identify large erosion features.

  10. Statistical methods for detecting differentially abundant features in clinical metagenomic samples.

    Directory of Open Access Journals (Sweden)

    James Robert White

    2009-04-01

    Full Text Available Numerous studies are currently underway to characterize the microbial communities inhabiting our world. These studies aim to dramatically expand our understanding of the microbial biosphere and, more importantly, hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora. An important prerequisite for such discoveries is computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them. We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data (e.g., as obtained through sequencing) to detect differentially abundant features. Our method, Metastats, employs the false discovery rate to improve specificity in high-complexity environments, and separately handles sparsely-sampled features using Fisher's exact test. Under a variety of simulations, we show that Metastats performs well compared to previously used methods, and significantly outperforms other methods for features with sparse counts. We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes, COG functional profiles of infant and mature gut microbiomes, and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes. The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study. For the COG and subsystem datasets, we provide the first statistically rigorous assessment of the differences between these populations. The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects. Our methods are robust across datasets of varied complexity and sampling level. While designed for metagenomic applications, our software
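
    A sketch of two statistical ingredients the method combines, Fisher's exact test for sparsely-sampled features and Benjamini-Hochberg control of the false discovery rate across features; the count tables are made up, and this is not the Metastats implementation itself.

```python
# Fisher's exact test per sparse feature plus Benjamini-Hochberg FDR control.
import numpy as np
from scipy.stats import fisher_exact

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected at false discovery rate alpha."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Each table: [[feature count, remaining counts] in population 1,
#              [feature count, remaining counts] in population 2].
tables = [
    [[12, 300], [2, 310]],
    [[5, 307], [6, 306]],
    [[0, 312], [9, 303]],
]
pvals = [fisher_exact(t)[1] for t in tables]
print("p-values:", np.round(pvals, 4))
print("significant at FDR 0.05:", benjamini_hochberg(pvals))
```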

  11. Non-invasive detection of the freezing of gait in Parkinson's disease using spectral and wavelet features.

    Science.gov (United States)

    Nazarzadeh, Kimia; Arjunan, Sridhar P; Kumar, Dinesh K; Das, Debi Prasad

    2016-08-01

    In this study, we analyzed the accelerometer data recorded during gait analysis of Parkinson's disease patients to detect freezing of gait (FOG) episodes. The proposed method filters the recordings to reduce noise in the leg movement signals and computes wavelet coefficients to detect FOG events. A publicly available FOG database was used, and the technique was evaluated using receiver operating characteristic (ROC) analysis. Results show a higher performance of the wavelet feature in discriminating FOG events from background activity when compared with the existing technique.

  12. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Memory-based detection of rare sound feature combinations in anesthetized rats.

    Science.gov (United States)

    Astikainen, Piia; Ruusuvirta, Timo; Wikgren, Jan; Penttonen, Markku

    2006-10-02

    It is unclear whether the ability of the brain to discriminate rare from frequently repeated combinations of sound features is limited to the normal sleep/wake cycle. We recorded epidural auditory event-related potentials in urethane-anesthetized rats presented with rare tones ('deviants') interspersed with frequently repeated ones ('standards'). Deviants differed from standards either in frequency alone or in frequency combined with intensity. In both cases, deviants elicited event-related potentials exceeding in amplitude event-related potentials to standards between 76 and 108 ms from the stimulus onset, suggesting the independence of the underlying integrative and memory-based change detection mechanisms of the brain from the normal sleep/wake cycle. The relations of these event-related potentials to mismatch negativity and N1 in humans are addressed.

  14. Combined multibeam and LIDAR bathymetry data from eastern Long Island Sound and westernmost Block Island Sound-A regional perspective

    Science.gov (United States)

    Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Doran, E.F.

    2011-01-01

    Detailed bathymetric maps of the sea floor in Long Island Sound are of great interest to the Connecticut and New York research and management communities because of this estuary's ecological, recreational, and commercial importance. The completed, geologically interpreted digital terrain models (DTMs), ranging in area from 12 to 293 square kilometers, provide important benthic environmental information, yet many applications require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 12 multibeam and 2 LIDAR (Light Detection and Ranging) contiguous bathymetric DTMs, produced by the National Oceanic and Atmospheric Administration during charting operations, into one dataset that covers much of eastern Long Island Sound and extends into westernmost Block Island Sound. The new dataset is adjusted to mean lower low water, is gridded to 4-meter resolution, and is provided in UTM Zone 18 NAD83 and geographic WGS84 projections. This resolution is adequate for sea floor-feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the grid include exposed bedrock outcrops, boulder lag deposits of submerged moraines, sand-wave fields, and scour depressions that reflect the strength of the oscillating and asymmetric tidal currents. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic artifacts visible in the bathymetric data include a dredged channel, shipwrecks, dredge spoils, mooring anchors, prop-scour depressions, buried cables, and bridge footings. Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental

  15. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for fault location in non-intrusive single-line-to-ground fault (SLGF) detection of low voltage distribution systems. The input features of the AI algorithms are extracted using statistical moment transformation to reduce the dimensions of the power signature inputs measured by using non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGF using the Electromagnetic Transient Program (EMTP) in a test system. To enhance the identification accuracy, these features, after normalization, are given to the AI algorithms presented and evaluated in this paper. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing the SLGF for various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify the fault locations by using non-intrusive monitoring techniques for low voltage distribution systems.
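
    One plausible reading of the statistical moment transformation, sketched on a synthetic waveform: a long power signature is reduced to a handful of moments (mean, variance, skewness, kurtosis) before being handed to an AI classifier. The exact moments and the signal are assumptions, not EMTP output.

```python
# Statistical-moment feature extraction from a (synthetic) power signature.
import numpy as np
from scipy.stats import skew, kurtosis

def moment_features(signal):
    return np.array([signal.mean(), signal.var(), skew(signal), kurtosis(signal)])

rng = np.random.default_rng(7)
t = np.linspace(0, 0.2, 2000)
waveform = np.sin(2 * np.pi * 60 * t) + 0.3 * np.exp(-40 * t) * rng.normal(size=t.size)
features = moment_features(waveform)
print("mean, var, skewness, kurtosis:", np.round(features, 4))
```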

  16. A feature matching and fusion-based positive obstacle detection algorithm for field autonomous land vehicles

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2017-03-01

    Full Text Available Positive obstacles can cause damage to field robots traveling in the field. A field autonomous land vehicle is a typical field robot. This article presents a feature matching and fusion-based algorithm to detect obstacles using LiDARs for field autonomous land vehicles. There are three main contributions: (1) A novel setup method for a compact LiDAR is introduced. This method improves the LiDAR data density and reduces the blind region of the LiDAR sensor. (2) A mathematical model is deduced under this new setup method. The ideal scan line is generated by using the deduced mathematical model. (3) Based on the proposed mathematical model, a feature matching and fusion (FMAF) based algorithm is presented, which is employed to detect obstacles. Experimental results show that the performance of the proposed algorithm is robust and stable, and that the computing time is reduced by two orders of magnitude compared with other existing algorithms. This algorithm has been successfully applied to our autonomous land vehicle, which won the championship in the Chinese "Overcome Danger 2014" ground unmanned vehicle challenge.

  17. Effective Dysphonia Detection Using Feature Dimension Reduction and Kernel Density Estimation for Patients with Parkinson’s Disease

    Science.gov (United States)

    Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

    2014-01-01

    Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson’s disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher’s linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406

  18. BLACK HOLE ATTACK IN AODV & FRIEND FEATURES UNIQUE EXTRACTION TO DESIGN DETECTION ENGINE FOR INTRUSION DETECTION SYSTEM IN MOBILE ADHOC NETWORK

    Directory of Open Access Journals (Sweden)

    HUSAIN SHAHNAWAZ

    2012-10-01

    Full Text Available An ad-hoc network is a collection of nodes that are capable of dynamically forming a temporary network without the support of any centralized fixed infrastructure. Since there is no central controller to determine reliable and secure communication paths in a mobile ad hoc network, each node has to rely on the others to forward packets; thus, highly cooperative nodes are required to ensure that the initiated data transmission process does not fail. In a mobile ad hoc network (MANET), where security is a crucial issue and nodes are forced to rely on their neighbors, trust plays an important role and can improve the number of successful data transmissions. The larger the number of trusted nodes, the higher the rate of successful data communication that can be expected. In this paper, a black hole attack is applied in the network, and statistics are collected to design a detection engine for a MANET intrusion detection system (IDS). Feature extraction and rule induction are applied to determine the accuracy of the detection engine using a support vector machine. The true positive rate generated by the detection engine is very high, and this is a novel approach in the area of mobile ad hoc intrusion detection systems.

  19. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-01-01

    Full Text Available Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared across ten state-of-the-art classifiers. Results indicate that the optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reaches 96.6%, which is able to meet the needs of real applications. The best combination of channel, feature, and classifier is subject-specific. In this work, the accuracy obtained with FE as the feature is far greater than that of the other features. The accuracy using the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. Channel selection has a larger impact on the accuracy, and the performance of the various channels differs considerably.
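
    A minimal sample entropy (SE) implementation for a single channel, with common default parameters m = 2 and r = 0.2 times the standard deviation; these are not necessarily the settings of the cited study, and the two synthetic signals merely illustrate that the more irregular one yields higher entropy.

```python
# Sample entropy for a 1-D signal (Chebyshev distance, self-matches excluded).
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)       # matching pairs of length-m templates
    a = count_matches(m + 1)   # matching pairs of length-(m+1) templates
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(8)
t = np.linspace(0, 40 * np.pi, 2000)
irregular = np.sin(t) + 0.5 * rng.normal(size=t.size)
regular = np.sin(t) + 0.1 * rng.normal(size=t.size)
print("SE irregular:", round(float(sample_entropy(irregular)), 3))
print("SE regular  :", round(float(sample_entropy(regular)), 3))
```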

  20. Applicability of computer-aided comprehensive tool (LINDA: LINeament Detection and Analysis) and shaded digital elevation model for characterizing and interpreting morphotectonic features from lineaments

    Science.gov (United States)

    Masoud, Alaa; Koike, Katsuaki

    2017-09-01

    Detection and analysis of linear features related to surface and subsurface structures have been deemed necessary in natural resource exploration and earth surface instability assessment. Subjectivity in choosing the control parameters required by conventional methods of lineament detection may cause unreliable results. To reduce this ambiguity, we developed LINDA (LINeament Detection and Analysis), an integrated tool with a graphical user interface in Visual Basic. This tool automates the processes of detection and analysis of linear features from grid data of topography (digital elevation model; DEM), gravity and magnetic surfaces, as well as data from remote sensing imagery. A simple interface with five display windows forms a user-friendly interactive environment. The interface facilitates grid data shading, detection and grouping of segments, lineament analyses for calculating strike and dip and estimating fault type, and interactive viewing of lineament geometry. Density maps of the center and intersection points of linear features (segments and lineaments) are also included. A systematic analysis of test DEM and Landsat 7 ETM+ imagery datasets in the North and South Eastern Deserts of Egypt is carried out to demonstrate the capability of LINDA and the correct use of its functions. Linear features from the DEM are superior to those from the imagery in terms of frequency, but both agree with the location and direction of V-shaped valleys, dykes, and reference fault data. Through the case studies, the applicability of LINDA is demonstrated in highlighting dominant structural trends, which can aid understanding of geodynamic frameworks in any region.

  1. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), of which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false-positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories, including shape, texture, contrast, isodensity, spiculation and local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross-validation method. Among these four methods, SFFS had the highest efficacy: it took only 3%-5% of the computational time of the GA approach and yielded the highest performance, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using the GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
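
    The sketch below illustrates method (4), plain sequential forward selection, wrapped around an ANN classifier with cross-validated AUC as the selection criterion; SFFS additionally interleaves conditional backward steps, which are omitted here. The data, network size and stopping rule are placeholders rather than the study's 271-feature mammography set.

```python
# Hedged sketch of greedy sequential forward selection (SFS) wrapped around an
# ANN classifier. Data are random placeholders, not the mammography ROI features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 12))      # placeholder: 200 ROIs x 12 candidate features
y = rng.integers(0, 2, size=200)    # 1 = malignant mass, 0 = false-positive region

def cv_auc(cols):
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=5, scoring="roc_auc").mean()

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    scores = {f: cv_auc(selected + [f]) for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:        # stop when no candidate improves the AUC
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "AUC:", round(best_score, 3))
```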

  2. Morphometric Change Detection of Lake Hawassa in the Ethiopian Rift Valley

    Directory of Open Access Journals (Sweden)

    Yonas Abebe

    2018-05-01

    Full Text Available The Ethiopian Rift Valley lakes have been subjected to environmental and ecological changes due to recent development endeavors and natural phenomena, which are visible in the alterations to the quality and quantity of the water resources. Monitoring lakes for temporal and spatial alterations has become a valuable indicator of environmental change. In this regard, hydrographic information is of paramount importance. The first extensive hydrographic survey of Lake Hawassa was conducted in 1999. In this study, a bathymetric map was prepared using advances in global positioning systems, portable sonar sounder technology, geostatistics, remote sensing and geographic information system (GIS) software analysis tools with the aim of detecting morphometric changes. Results showed that the surface area of Lake Hawassa increased by 7.5% in 1999 and 3.2% in 2011 relative to that of 1985. Water volume decreased by 17% between 1999 and 2011. Silt accumulated over more than 50% of the bed surface has caused a 4% loss of the lake’s storage capacity. The sedimentation patterns identified may have been strongly impacted by anthropogenic activities, including urbanization and farming practices located on the northern, eastern and western sides of the lake watershed. The study demonstrated this geostatistical modeling approach to be a rapid and cost-effective method for bathymetric mapping.

  3. Comparison of feature extraction methods within a spatio-temporal land cover change detection framework

    CSIR Research Space (South Africa)

    Kleynhans, W

    2011-07-01

    Full Text Available COMPARISON OF FEATURE EXTRACTION METHODS WITHIN A SPATIO-TEMPORAL LAND COVER CHANGE DETECTION FRAMEWORK. W. Kleynhans, B.P. Salmon, J.C. Olivier, K.J. Wessels, F. van den Bergh; Electrical, Electronic and Computer Engineering, University of Pretoria, South... Bergh, and K. Steenkamp, "Improving land cover class separation using an extended Kalman filter on MODIS NDVI time series data," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 2, pp. 381–385, Apr. 2010. ...

  4. The radiological features, diagnosis and management of screen-detected lobular neoplasia of the breast: Findings from the Sloane Project.

    Science.gov (United States)

    Maxwell, Anthony J; Clements, Karen; Dodwell, David J; Evans, Andrew J; Francis, Adele; Hussain, Monuwar; Morris, Julie; Pinder, Sarah E; Sawyer, Elinor J; Thomas, Jeremy; Thompson, Alastair

    2016-06-01

    To investigate the radiological features, diagnosis and management of screen-detected lobular neoplasia (LN) of the breast. 392 women with pure LN alone were identified within the prospective UK cohort study of screen-detected non-invasive breast neoplasia (the Sloane Project). Demography, radiological features and diagnostic and therapeutic procedures were analysed. Non-pleomorphic LN (369/392) was most frequently diagnosed among women aged 50-54 and in 53.5% was at the first screen. It occurred most commonly on the left (58.0%; p = 0.003), in the upper outer quadrant and confined to one site (single quadrant or retroareolar region). No bilateral cases were found. The predominant radiological feature was microcalcification (most commonly granular), which increased in frequency with increasing breast density. Casting microcalcification as the predominant feature was associated with a significantly larger lesion size than granular and punctate patterns (p = 0.034). 326/369 (88.3%) women underwent surgery, including 17 who underwent >1 operation, six who had mastectomy and six who had axillary surgery. Two patients had radiotherapy and 15 had endocrine treatment. Pleomorphic lobular carcinoma in situ (23/392) presented as granular microcalcification in 12; four women had mastectomy and six had radiotherapy. Screen-detected LN occurs in relatively young women and is predominantly non-pleomorphic and unilateral. It is typically associated with granular or punctate microcalcification in the left upper outer quadrant. Management, including surgical resection, is highly variable and requires evidence-based guideline development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. DETECTION OF SHARP SYMMETRIC FEATURES IN THE CIRCUMBINARY DISK AROUND AK Sco

    International Nuclear Information System (INIS)

    Janson, Markus; Asensio-Torres, Ruben; Thalmann, Christian; Meyer, Michael R.; Garufi, Antonio; Boccaletti, Anthony; Maire, Anne-Lise; Henning, Thomas; Pohl, Adriana; Zurlo, Alice; Marzari, Francesco; Carson, Joseph C.; Augereau, Jean-Charles; Desidera, Silvano

    2016-01-01

    The Search for Planets Orbiting Two Stars survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ∼13–40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these interpretations are discussed, as well as whether these features may have been caused by one or several circumbinary planets interacting with the disk

  6. DETECTION OF SHARP SYMMETRIC FEATURES IN THE CIRCUMBINARY DISK AROUND AK Sco

    Energy Technology Data Exchange (ETDEWEB)

    Janson, Markus; Asensio-Torres, Ruben [Department of Astronomy, Stockholm University, AlbaNova University Center, SE-106 91 Stockholm (Sweden); Thalmann, Christian; Meyer, Michael R.; Garufi, Antonio [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich (Switzerland); Boccaletti, Anthony [LESIA, Observatoire de Paris—Meudon, CNRS, Université Pierre et Marie Curie, Université Paris Didierot, 5 Place Jules Janssen, F-92195 Meudon (France); Maire, Anne-Lise; Henning, Thomas; Pohl, Adriana [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Zurlo, Alice [Núcleo de Astronomía, Facultad de Ingeniería, Universidad Diego Portales, Av. Ejercito 441, Santiago (Chile); Marzari, Francesco [Dipartimento di Fisica, University of Padova, Via Marzolo 8, I-35131 Padova (Italy); Carson, Joseph C. [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Augereau, Jean-Charles [Université Grenoble Alpes, IPAG, F-38000 Grenoble (France); Desidera, Silvano [INAF—Osservatorio Astromonico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy)

    2016-01-01

    The Search for Planets Orbiting Two Stars survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ∼13–40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these interpretations are discussed, as well as whether these features may have been caused by one or several circumbinary planets interacting with the disk.

  7. Sea-floor morphology and sedimentary environments in western Block Island Sound, offshore of Fishers Island, New York

    Science.gov (United States)

    McMullen, Katherine Y.; Poppe, Lawrence J.; Danforth, William W.; Blackwood, Dann S.; Winner, William G.; Parker, Castle E.

    2015-01-01

    Multibeam-bathymetric and sidescan-sonar data, collected by the National Oceanic and Atmospheric Administration in a 114-square-kilometer area of Block Island Sound, southeast of Fishers Island, New York, are combined with sediment samples and bottom photography collected by the U.S. Geological Survey from 36 stations in this area in order to interpret sea-floor features and sedimentary environments. These interpretations and datasets provide base maps for studies on benthic ecology and resource management. The geologic features and sedimentary environments on the sea floor are products of the area’s glacial history and modern processes. These features include bedrock, drumlins, boulders, cobbles, large current-scoured bathymetric depressions, obstacle marks, and glaciolacustrine sediments found in high-energy sedimentary environments of erosion or nondeposition; and sand waves and megaripples in sedimentary environments characterized by coarse-grained bedload transport. Trawl marks are preserved in lower energy environments of sorting and reworking. This report releases the multibeam-bathymetric, sidescan-sonar, sediment, and photographic data and interpretations of the features and sedimentary environments in Block Island Sound, offshore Fishers Island.

  8. Multi-feature classifiers for burst detection in single EEG channels from preterm infants

    Science.gov (United States)

    Navarro, X.; Porée, F.; Kuchenbuch, M.; Chavez, M.; Beuchée, Alain; Carrault, G.

    2017-08-01

    Objective. The study of electroencephalographic (EEG) bursts in preterm infants provides valuable information about maturation or prognostication after perinatal asphyxia. Over the last two decades, a number of works have proposed algorithms to automatically detect EEG bursts in preterm infants, but they were designed for populations under 35 weeks of post-menstrual age (PMA). However, as brain activity evolves rapidly during postnatal life, these solutions might under-perform with increasing PMA. In this work we focused on preterm infants reaching term ages (PMA ⩾36 weeks) using multi-feature classification on a single EEG channel. Approach. Five EEG burst detectors relying on different machine learning approaches were compared: logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (kNN), support vector machines (SVM) and thresholding (Th). Classifiers were trained on visually labeled EEG recordings from 14 very preterm infants (born after 28 weeks of gestation) with 36-41 weeks PMA. Main results. The best-performing classifiers reached about 95% accuracy (kNN, SVM and LR), whereas Th obtained 84%. In terms of agreement with human labelling, LR provided the highest scores (Cohen’s kappa = 0.71) using only three EEG features. Applying this classifier to an unlabeled database of 21 infants ⩾36 weeks PMA, we found that long EEG bursts and short inter-burst periods are characteristic of infants with the highest PMA and weights. Significance. In view of these results, LR-based burst detection could be a suitable tool to study maturation in monitoring or portable devices using a single EEG channel.
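
    A minimal sketch of single-channel burst detection with logistic regression, in the spirit of the LR classifier above. The three window features used here (variance, line length, a crude low-frequency power term) and the labels are illustrative assumptions, not the study's feature set.

```python
# Minimal sketch of single-channel EEG burst detection with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def window_features(w):
    var = np.var(w)                                      # overall amplitude
    line_length = np.sum(np.abs(np.diff(w)))             # signal "activity"
    power_low = np.sum(np.abs(np.fft.rfft(w))[:8] ** 2)  # crude low-frequency power
    return [var, line_length, power_low]

rng = np.random.default_rng(3)
eeg_windows = rng.normal(size=(300, 256))        # placeholder 1-s windows
labels = rng.integers(0, 2, size=300)            # 1 = burst, 0 = inter-burst (visual labels)

X = np.array([window_features(w) for w in eeg_windows])
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```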

  9. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting the statistical signatures of ship targets in terms of biologically inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in the images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After obtaining the suspicious areas, some false alarms such as waves and small ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
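
    The sketch below illustrates the chip-level LBP + SVM step described above: a basic 8-neighbour LBP histogram is computed per image chip and classified with an RBF SVM. The visual attention (saliency) stage is omitted, and the chips and labels are random placeholders.

```python
# Illustrative 8-neighbour LBP histogram per image chip, classified with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lbp_histogram(img):
    """Basic 3x3 local binary pattern codes, returned as a normalised 256-bin histogram."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, nb in enumerate(neighbours):
        codes += (nb >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(4)
chips = rng.random(size=(200, 64, 64))       # placeholder image chips
labels = rng.integers(0, 2, size=200)        # 1 = contains ship, 0 = background

X = np.array([lbp_histogram(c) for c in chips])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```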

  10. Bathymetric preference of four major genera of rectilinear benthic foraminifera within oxygen minimum zone in Arabian Sea off central west coast of India.

    Digital Repository Service at National Institute of Oceanography (India)

    Mazumder, A.; Nigam, R.

    ...species, including two species of Bolivina and a single species of Uvigerina, with the bathymetrical variation from the northern Gulf of Mexico. But there has been no attempt to correlate the total population of any important genus of rectilinear foraminifera...

  11. Global seafloor geomorphic features map: applications for ocean conservation and management

    Science.gov (United States)

    Harris, P. T.; Macmillan-Lawler, M.; Rupp, J.; Baker, E.

    2013-12-01

    Seafloor geomorphology, mapped and measured by marine scientists, has proven to be a very useful physical attribute for ocean management because different geomorphic features (eg. submarine canyons, seamounts, spreading ridges, escarpments, plateaus, trenches etc.) are commonly associated with particular suites of habitats and biological communities. Although we now have better bathymetric datasets than ever before, there has been little effort to integrate these data to create an updated map of seabed geomorphic features or habitats. Currently the best available global seafloor geomorphic features map is over 30 years old. A new global seafloor geomorphic features map (GSGM) has been created based on the analysis and interpretation of the SRTM (Shuttle Radar Topography Mission) 30 arc-second (~1 km) global bathymetry grid. The new map includes global spatial data layers for 29 categories of geomorphic features, defined by the International Hydrographic Organisation. The new geomorphic features map will allow: 1) Characterization of bioregions in terms of their geomorphic content (eg. GOODS bioregions, Large Marine Ecosystems (LMEs), ecologically or biologically significant areas (EBSA)); 2) Prediction of the potential spatial distribution of vulnerable marine ecosystems (VME) and marine genetic resources (MGR; eg. associated with hydrothermal vent communities, shelf-incising submarine canyons and seamounts rising to a specified depth); and 3) Characterization of national marine jurisdictions in terms of their inventory of geomorphic features and their global representativeness of features. To demonstrate the utility of the GSGM, we have conducted an analysis of the geomorphic feature content of the current global inventory of marine protected areas (MPAs) to assess the extent to which features are currently represented. The analysis shows that many features have very low representation, for example fans and rises have less than 1 per cent of their total area

  12. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumption about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, where the optimal features were selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results showed that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
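
    As a sketch of the RQA feature extraction described above, the code below builds a recurrence plot from a time-delay embedding under the maximum norm and computes two standard RQA measures (recurrence rate and determinism). The epoch, embedding dimension, delay and threshold ε are illustrative choices, not the study's settings.

```python
# Sketch of a recurrence plot and two common RQA measures for a single epoch.
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=0.5):
    # time-delay embedding
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    # maximum-norm distance matrix, thresholded at eps
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    return (dist <= eps).astype(int)

def rqa_measures(rp, lmin=2):
    rr = rp.mean()                                   # recurrence rate
    # determinism: fraction of recurrent points lying on diagonals of length >= lmin
    n = rp.shape[0]
    diag_points, det_points = 0, 0
    for k in range(-(n - 1), n):
        if k == 0:
            continue                                  # skip the line of identity
        line = np.diagonal(rp, offset=k)
        diag_points += line.sum()
        run = 0
        for v in np.append(line, 0):                  # sentinel zero flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    det = det_points / diag_points if diag_points else 0.0
    return rr, det

x = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.1 * np.random.default_rng(5).normal(size=400)
rp = recurrence_plot(x)
print("RR = %.3f, DET = %.3f" % rqa_measures(rp))
```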

  13. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today’s H.264/AVC coded videos have high quality and a high data-compression ratio. They also have strong fault tolerance and good network adaptability, and they have been widely applied on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can be used as a first step in the study of video-tampering forensics. This paper proposes a simple but effective double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then combined to generate one enhanced feature that represents the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the feature. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
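
    A simplified stand-in for the final periodicity-analysis step: given a per-frame feature sequence (here synthetic, in place of the combined SODB/S-MB feature), candidate GOP sizes are searched exhaustively in the time domain and the smallest strongly periodic size is returned. This is not the paper's exact estimator.

```python
import numpy as np

def estimate_primary_gop(feature_seq, gop_candidates=range(4, 33)):
    """Time-domain periodicity analysis: for each candidate GOP size g, measure the
    height of the strongest phase of a comb with period g. Multiples of the true
    period score similarly, so the smallest strongly periodic size is returned."""
    f = np.asarray(feature_seq, dtype=float)
    f = f - f.mean()
    strengths = {g: max(f[p::g].mean() for p in range(g)) for g in gop_candidates}
    s_max = max(strengths.values())
    return min(g for g, s in strengths.items() if s >= 0.9 * s_max)

# Synthetic example: an artefact that repeats every 12 frames plus noise,
# standing in for the combined SODB / skip-macroblock feature sequence.
rng = np.random.default_rng(6)
seq = rng.normal(size=1200)
seq[::12] += 5.0
print("estimated primary GOP size:", estimate_primary_gop(seq))
```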

  14. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features

    Science.gov (United States)

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-04-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  15. When do letter features migrate? A boundary condition for feature-integration theory.

    Science.gov (United States)

    Butler, B E; Mewhort, D J; Browse, R A

    1991-01-01

    Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.

  16. Possible detection of an emission feature near 584 A in the direction of G191-B2B

    Science.gov (United States)

    Green, James; Bowyer, Stuart; Jelinsky, Patrick

    1990-01-01

    A possible spectral emission feature is reported in the direction of the nearby hot white dwarf G191-B2B at 581.5 ± 6 Å with a significance of 3.8 sigma. This emission has been identified as He I 584.3 Å. The emission cannot be due to local geocoronal emission or interplanetary backscatter of solar He I 584 Å emission because the feature is not detected in a nearby sky exposure. Possible sources for this emission are examined, including the photosphere of G191-B2B, the comparison star G191-B2A, and a possible nebulosity near or around G191-B2B. The parameters required to explain the emission are derived for each case. All of these explanations require unexpected physical conditions; hence we believe this result must receive confirming verification despite the statistical likelihood of the detection.

  17. Possible detection of an emission feature near 584 A in the direction of G191-B2B

    International Nuclear Information System (INIS)

    Green, J.; Bowyer, S.; Jelinsky, P.

    1990-01-01

    A possible spectral emission feature is reported in the direction of the nearby hot white dwarf G191-B2B at 581.5 ± 6 Å with a significance of 3.8 sigma. This emission has been identified as He I 584.3 Å. The emission cannot be due to local geocoronal emission or interplanetary backscatter of solar He I 584 Å emission because the feature is not detected in a nearby sky exposure. Possible sources for this emission are examined, including the photosphere of G191-B2B, the comparison star G191-B2A, and a possible nebulosity near or around G191-B2B. The parameters required to explain the emission are derived for each case. All of these explanations require unexpected physical conditions; hence we believe this result must receive confirming verification despite the statistical likelihood of the detection. 15 refs

  18. Automatic Railway Traffic Object Detection System Using Feature Fusion Refine Neural Network under Shunting Mode

    Directory of Open Access Journals (Sweden)

    Tao Ye

    2018-06-01

    Full Text Available Many accidents happen in shunting mode, when the speed of a train is below 45 km/h. In this mode, train attendants observe the railway condition ahead using the traditional manual method and report their observations to the driver in order to avoid danger. To address this problem, an automatic object detection system based on a convolutional neural network (CNN) is proposed to detect objects ahead in shunting mode; it is called the Feature Fusion Refine neural network (FR-Net). It consists of three connected modules, i.e., the depthwise-pointwise convolution, the coarse detection module, and the object detection module. Depthwise-pointwise convolutions are used to improve real-time performance. The coarse detection module coarsely refines the locations and sizes of prior anchors to provide better initialization for the subsequent module and also reduces the search space for classification, whereas the object detection module aims to regress accurate object locations and predict the class labels for the prior anchors. The experimental results on the railway traffic dataset show that FR-Net achieves 0.8953 mAP at 72.3 FPS on a machine with a GeForce GTX1080Ti with an input size of 320 × 320 pixels. The results imply that FR-Net achieves a good tradeoff between effectiveness and real-time performance. The proposed method can meet the needs of practical application in shunting mode.
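
    A minimal PyTorch sketch of the depthwise + pointwise convolution building block referred to above; the channel sizes, stride and normalization choices are illustrative and not taken from FR-Net.

```python
# Minimal sketch of a depthwise + pointwise (depthwise-separable) convolution block.
import torch
import torch.nn as nn

class DepthwisePointwise(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depthwise: one 3x3 filter per input channel (groups = in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # pointwise: 1x1 convolution mixing channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# A 320x320 RGB input, matching the input size used in the experiments above.
x = torch.randn(1, 3, 320, 320)
block = DepthwisePointwise(3, 32, stride=2)
print(block(x).shape)   # torch.Size([1, 32, 160, 160])
```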

  19. Research on Copy-Move Image Forgery Detection Using Features of Discrete Polar Complex Exponential Transform

    Science.gov (United States)

    Gan, Yanfen; Zhong, Junliu

    2015-12-01

    With the aid of sophisticated photo-editing software such as Photoshop, copy-move image forgery has become widespread and a major concern in the field of information security in modern society. Much work on detecting this kind of forgery has achieved good results, but the detection of geometrically transformed copy-move regions is still not satisfactory. In this paper, a new method based on the Polar Complex Exponential Transform is proposed. This method addresses issues in image geometric moments, focusing on constructing rotation-invariant moments and extracting features from them. In order to reduce the rounding errors of the transform from the polar coordinate system to the Cartesian coordinate system, a new transformation method is presented and discussed in detail. The new method constructs a 9 × 9 shrunk template to transform the Cartesian coordinate system back to the polar coordinate system, which reduces transform errors to a much greater degree. Forgery detection, such as copy-move image forgery detection, is a difficult procedure, but experiments show that our method is a substantial improvement in detecting and identifying forged images affected by rotation.

  20. Novel Feature Modelling the Prediction and Detection of sEMG Muscle Fatigue towards an Automated Wearable System

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2010-05-01

    Full Text Available Surface electromyography (sEMG) activity of the biceps muscle was recorded from ten subjects performing isometric contraction until fatigue. A novel feature (1D spectro_std) was used to model three classes of fatigue, enabling the prediction and detection of fatigue. Initial results of class separation were encouraging, discriminating between the three classes of fatigue: a longitudinal classification of Non-Fatigue versus Transition-to-Fatigue showed 81.58% correct classification with a prediction accuracy of 0.74, while the longitudinal classification of Transition-to-Fatigue versus Fatigue showed a lower average correct classification of 66.51% with a prediction accuracy of 0.73. Comparison of the 1D spectro_std with other sEMG fatigue features on the same dataset shows a significant improvement in classification: results show a significant 20.58% (p < 0.01) improvement when using the 1D spectro_std to classify Non-Fatigue and Transition-to-Fatigue, and in classifying Transition-to-Fatigue and Fatigue the results also show a significant improvement over the other features, giving 8.14% (p < 0.05) on average across all compared features.

  1. Game Theoretic Approach for Systematic Feature Selection; Application in False Alarm Detection in Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Fatemeh Afghah

    2018-03-01

    Full Text Available Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices might generate false alarms that reduce the standard of care and result in desensitization of caregivers to alarms. Therefore, reducing the number of false alarms is of great importance. Many approaches, such as signal processing, machine learning, and the design of more accurate sensors, have been developed for this purpose. However, the significant intrinsic correlation among the features extracted from different sensors has been mostly overlooked. A majority of current data mining techniques fail to capture such correlation among the collected signals from different sensors, which limits their alarm recognition capabilities. Here, we propose a novel information-theoretic predictive modeling technique based on the idea of coalition game theory to enhance the accuracy of false alarm detection in ICUs by accounting for the synergistic power of signal attributes in the feature selection stage. This approach brings together techniques from information theory and game theory to account for inter-feature mutual information in determining the predictors most correlated with false alarms, by calculating the Banzhaf power of each feature. The numerical results show that the proposed method can enhance classification accuracy and improve the area under the ROC (receiver operating characteristic) curve compared to other feature selection techniques when integrated into classifiers such as Bayes-Net that consider inter-feature dependencies.

  2. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes improved feature matching between successive video frames using a neural network methodology in order to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned depths based on the Kinect data, which the robot can use to determine its navigation path, along with obstacle detection applications.
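
    A hedged sketch of matching features between successive frames and assigning each match a depth from the registered Kinect depth image. The paper's neural-network matcher is replaced here by OpenCV's ORB detector with a brute-force Hamming matcher, and the frames and depth map are random placeholders.

```python
# ORB keypoint matching between two successive frames, with each match assigned a
# depth value from the aligned depth map.
import cv2
import numpy as np

def match_with_depth(frame_prev, frame_curr, depth_curr, max_matches=50):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    results = []
    for m in matches:
        x, y = map(int, kp2[m.trainIdx].pt)                 # keypoint in current frame
        results.append(((x, y), float(depth_curr[y, x])))   # (pixel, depth in mm)
    return results

# Placeholder frames and depth map (in practice: consecutive Kinect colour frames
# converted to grayscale, plus the registered depth image).
rng = np.random.default_rng(7)
f1 = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
f2 = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
depth = rng.integers(500, 4000, size=(480, 640), dtype=np.uint16)
print(match_with_depth(f1, f2, depth)[:3])
```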

  3. [Spectral features analysis of Pinus massoniana with pest of Dendrolimus punctatus Walker and levels detection].

    Science.gov (United States)

    Xu, Zhang-Hua; Liu, Jian; Yu, Kun-Yong; Gong, Cong-Hong; Xie, Wan-Jun; Tang, Meng-Ya; Lai, Ri-Wen; Li, Zeng-Lu

    2013-02-01

    Taking 51 field-measured hyperspectral datasets with different pest levels in Yanping, Fujian Province as objects, the spectral reflectance and first-derivative features of four pest levels (healthy, mild, moderate and severe) were analyzed. On the basis of seven detection parameters, pest-level detection models were built. The results showed that (1) the spectral reflectance of Pinus massoniana with pests was significantly lower than that of the healthy state, and the higher the pest level, the lower the reflectance; (2) with the increase in pest level, the "green peak" and "red valley" of the spectral reflectance curves of Pinus massoniana gradually disappeared, and the red edge was leveled; (3) the pest led to a red shift of the spectral "green peak" and a blue shift of the red edge position, but the changes in the "red valley" and near-infrared position were complicated; (4) CARI, RES, REA and REDVI were highly correlated with pest levels, while the correlations between REP, RERVI, RENDVI and pest level were weak; (5) the multiple linear regression model with the seven detection parameters as variables could effectively detect the pest levels of Dendrolimus punctatus Walker, with both the estimation rate and accuracy above 0.85.

  4. The fast detection of rare auditory feature conjunctions in the human brain as revealed by cortical gamma-band electroencephalogram.

    Science.gov (United States)

    Ruusuvirta, T; Huotilainen, M

    2005-01-01

    Natural environments typically contain temporal scatters of sounds emitted from multiple sources. The sounds may often physically stand out from one another in their conjoined rather than simple features. This poses a particular challenge for the brain to detect which of these sounds are rare and, therefore, potentially important for survival. We recorded gamma-band (32-40 Hz) electroencephalographic (EEG) oscillations from the scalp of adult humans who passively listened to a repeated tone carrying frequent and rare conjunctions of its frequency and intensity. EEG oscillations that this tone induced, rather than evoked, differed in amplitude between the two conjunction types within the 56-ms analysis window from tone onset. Our finding suggests that, perhaps with the support of its non-phase-locked synchrony in the gamma band, the human brain is able to detect rare sounds as feature conjunctions very rapidly.

  5. MRI features of placenta accreta

    International Nuclear Information System (INIS)

    Cao Manrui; Du Mu; Huang Yi; Liu Bingguang; Zhang Fangjing; Guo Jimin; Zhu Zhijun

    2012-01-01

    Objective: To investigate the MRI features of placenta accreta. Methods: From April 2009 to June 2011, 15 patients with placenta accreta underwent MRI examination. In these patients, placenta accreta was diagnosed based on clinical manifestations or postoperative histopathology. The MRI features of placenta accreta in this study group were retrospectively analyzed and compared with those in 15 pregnant women without placenta accreta (control group) using Fisher's exact test. Results: In the 15 patients with placenta accreta, uterine bulging and (or) a focal outward contour bulge was detected in 14 patients; heterogeneous signal intensity in the placenta was detected in 15 patients; dark intraplacental bands on T2-weighted images were detected in 15 patients; and increased subplacental vascularity was detected in 11 patients on T1-weighted images. In the study group, 14 patients showed at least three of the above four features, and in all of them uterine bulging and (or) a focal outward contour bulge, heterogeneous signal intensity in the placenta and dark intraplacental bands on T2-weighted images were detected; one patient showed heterogeneous signal intensity in the placenta, dark intraplacental bands on T2-weighted images and increased subplacental vascularity. In the control group, no patient had three of the above features. Uterine bulging and (or) a focal outward contour bulge, heterogeneous signal intensity in the placenta, dark intraplacental bands on T2-weighted images and increased subplacental vascularity were detected in 3, 6, 3 and 4 patients (P=0.000, 0.001, 0.000 and 0.027), respectively. Conclusions: The main MRI features of placenta accreta are uterine bulging and (or) a focal outward contour bulge, heterogeneous signal intensity in the placenta and dark intraplacental bands on T2-weighted images. Besides, increased subplacental vascularity can also provide useful information for the diagnosis of placenta accreta. (authors)

  6. A Novel Real-Time Feature Matching Scheme

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2014-02-01

    Full Text Available The Affine Scale Invariant Feature Transform (ASIFT) can obtain full affine invariance; however, its time cost is about twice that of the Scale Invariant Feature Transform (SIFT). We propose an improved ASIFT algorithm based on feature points in scale space for real-time application. In order to detect affine-invariant feature points, we establish a second-order difference-of-Gaussian (DoG) pyramid and replace the extremum detection in the DoG pyramid by zero detection in the proposed second-order DoG pyramid, which decreases the complexity of the scheme. Experimental results show that the proposed method makes significant progress in real-time performance compared to the traditional one, while preserving full affine invariance and precision.

  7. A Novel Ship Detection Method Based on Gradient and Integral Feature for Single-Polarization Synthetic Aperture Radar Imagery

    Directory of Open Access Journals (Sweden)

    Hao Shi

    2018-02-01

    Full Text Available With the rapid development of remote sensing technologies, SAR satellites like China’s Gaofen-3 satellite have more imaging modes and higher resolution. With the availability of high-resolution SAR images, automatic ship target detection has become an important topic in maritime research. In this paper, a novel ship detection method based on gradient and integral features is proposed. This method is mainly composed of three steps. First, in the preprocessing step, a filter is employed to smooth the clutter, and the smoothing effect can be adaptively adjusted according to the statistics of the sub-window. Thus, it can retain details while achieving noise suppression. Second, in the candidate-area extraction step, a sea-land segmentation method based on gradient enhancement is presented. The integral image method is employed to accelerate computation. Finally, in the ship target identification step, a feature extraction strategy based on Haar-like gradient information and a Radon transform is proposed. This strategy decreases the number of templates compared with traditional Haar-like methods. Experiments were performed using Gaofen-3 single-polarization SAR images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. In addition, this method has the potential for on-board processing.
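
    The sketch below shows the integral-image (summed-area table) trick used to accelerate box-sum and Haar-like gradient computations of the kind described above: any rectangular sum becomes four table look-ups. The SAR image and window coordinates are placeholders.

```python
# Integral image and O(1) box sums for Haar-like gradient features.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row / left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the summed-area table."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(8)
sar = rng.random(size=(256, 256))            # placeholder SAR intensity image
ii = integral_image(sar)

# Example Haar-like horizontal gradient: right half minus left half of a window.
r0, c0, r1, c1 = 100, 100, 132, 164
grad = box_sum(ii, r0, (c0 + c1) // 2, r1, c1) - box_sum(ii, r0, c0, r1, (c0 + c1) // 2)
print("window sum:", box_sum(ii, r0, c0, r1, c1), "haar gradient:", grad)
```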

  8. Intrusion detection model using fusion of chi-square feature selection and multi class SVM

    Directory of Open Access Journals (Sweden)

    Ikram Sumaiya Thaseen

    2017-10-01

    Full Text Available Intrusion detection is a promising area of research in the domain of security, given the rapid development of the Internet in everyday life. Many intrusion detection systems (IDS) employ a sole classifier algorithm for classifying network traffic as normal or abnormal. Due to the large amount of data, these sole classifier models fail to achieve a high attack detection rate with a reduced false alarm rate. However, by applying dimensionality reduction, the data can be efficiently reduced to an optimal set of attributes without loss of information and then classified accurately using a multi-class modeling technique to identify the different network attacks. In this paper, we propose an intrusion detection model using chi-square feature selection and a multi-class support vector machine (SVM). A parameter-tuning technique is adopted for optimization of the radial basis function kernel parameter gamma (γ) and the overfitting constant C, the two important parameters required for the SVM model. The main idea behind this model is to construct a multi-class SVM, which has not been adopted for IDS so far, to decrease the training and testing time and increase the individual classification accuracy for the network attacks. The experimental results on the NSL-KDD dataset, an enhanced version of the KDDCup 1999 dataset, show that our proposed approach yields a better detection rate and a reduced false alarm rate. An experiment on the computational time required for training and testing is also carried out for usage in time-critical applications.
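
    A minimal scikit-learn sketch of the described pipeline: chi-square feature selection followed by a multi-class RBF SVM, with a grid search over gamma and C. The NSL-KDD loading step is omitted and replaced by non-negative placeholder data with five classes.

```python
# Chi-square feature selection + multi-class RBF-SVM with gamma/C tuning.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(9)
X = rng.random(size=(500, 41))                 # 41 features, as in KDD-style records
y = rng.integers(0, 5, size=500)               # normal + 4 attack classes (placeholder)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                 # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=20)),       # keep the 20 highest-scoring features
    ("svm", SVC(kernel="rbf")),                # one-vs-one multi-class SVM
])

# Tune gamma and the penalty constant C, as in the parameter-tuning step above.
grid = GridSearchCV(pipe, {"svm__gamma": [0.01, 0.1, 1.0], "svm__C": [1, 10, 100]}, cv=3)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```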

  9. Groundwater fluxes in a shallow seasonal wetland pond: The effect of bathymetric uncertainty on predicted water and solute balances

    Science.gov (United States)

    Trigg, Mark A.; Cook, Peter G.; Brunner, Philip

    2014-09-01

    The successful management of groundwater dependent shallow seasonal wetlands requires a sound understanding of groundwater fluxes. However, such fluxes are hard to quantify. Water volume and solute mass balance models can be used in order to derive an estimate of groundwater fluxes within such systems. This approach is particularly attractive, as it can be undertaken using measurable environmental variables, such as; rainfall, evaporation, pond level and salinity. Groundwater fluxes estimated from such an approach are subject to uncertainty in the measured variables as well as in the process representation and in parameters within the model. However, the shallow nature of seasonal wetland ponds means water volume and surface area can change rapidly and non-linearly with depth, requiring an accurate representation of the wetland pond bathymetry. Unfortunately, detailed bathymetry is rarely available and simplifying assumptions regarding the bathymetry have to be made. However, the implications of these assumptions are typically not quantified. We systematically quantify the uncertainty implications for eight different representations of wetland bathymetry for a shallow seasonal wetland pond in South Australia. The predictive uncertainty estimation methods provided in the Model-Independent Parameter Estimation and Uncertainty Analysis software (PEST) are used to quantify the effect of bathymetric uncertainty on the modelled fluxes. We demonstrate that bathymetry can be successfully represented within the model in a simple parametric form using a cubic Bézier curve, allowing an assessment of bathymetric uncertainty due to measurement error and survey detail on the derived groundwater fluxes compared with the fixed bathymetry models. Findings show that different bathymetry conceptualisations can result in very different mass balance components and hence process conceptualisations, despite equally good fits to observed data, potentially leading to poor management
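
    As an illustration of the parametric bathymetry idea above, the sketch below represents a pond's depth-area relationship with a cubic Bézier curve and integrates it to a stage-volume curve. The control points, units and pond level are hypothetical, not the South Australian site's values.

```python
# Cubic Bezier depth-area curve and the derived stage-volume relationship.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t in [0, 1]."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical control points as (depth above pond bottom [m], inundated area [m^2]).
p0, p1, p2, p3 = map(np.array, [(0.0, 0.0), (0.2, 1.5e4), (0.6, 4.0e4), (1.0, 6.0e4)])

t = np.linspace(0.0, 1.0, 201)
curve = cubic_bezier(p0, p1, p2, p3, t)        # columns: depth, area
depth, area = curve[:, 0], curve[:, 1]

# Volume at each stage: integrate area over depth (trapezoidal rule).
volume = np.concatenate([[0.0], np.cumsum(0.5 * (area[1:] + area[:-1]) * np.diff(depth))])
stage = 0.5                                     # example pond level [m]
print("area at 0.5 m: %.0f m^2, volume: %.0f m^3"
      % (np.interp(stage, depth, area), np.interp(stage, depth, volume)))
```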

  10. Interpretation of bathymetric and magnetic data from the easternmost segment of Australian-Antarctic Ridge, 156°-161°E

    Science.gov (United States)

    Choi, H.; Kim, S.; Park, S.

    2013-12-01

    From 2011 to 2013, the Korea Polar Research Institute (KOPRI) conducted a series of geophysical and geochemical expeditions on the longest and easternmost segment of the Australian-Antarctic Ridge, located at 61°-63°S and 156°-161°E. This ridge segment plays an important role in constraining the tectonics of the Antarctic plate. Using IBRV ARAON, detailed bathymetric data and eleven total magnetic profiles were collected. The studied ridge has spread in a NNW-SSE direction and tends to be shallower to the west and deeper to the east. The western side of the ridge (156°-157.50°E) shows a broad axial high and numerous seamounts, indicative of massive volcanism. Near the center of the ridge (158°-159°E), a seamount chain stretches toward the south from the ridge. Also, symmetric seafloor fabric is clearly observed in the eastern portion (158.50°-160°E) of the seamount chain. From the topographic change along the ridge axis, the western part of the ridge appears to have a sufficient magma supply. In contrast, the eastern side of the ridge (160°-161°E) is characterized by an axial valley and relatively greater depth. Nevertheless, the observed total magnetic field anomalies exhibit symmetric patterns across the ridge axis. Although there have not been enough magnetic survey lines, the half-spreading rates of the ridge are estimated at 37.7 mm/y and 35.3 mm/y for the western portion of the ridge and 42.3 mm/y for the eastern portion. The studied ridge can be categorized as an intermediate-spreading ridge, confirming previous studies based on the spreading rates of the global ridge system. Here we present preliminary results on bathymetric changes along the ridge axis and their relationship with melt supply distribution, and detailed magnetic properties of the ridge constrained by the observed total field anomalies.

  11. Detection of Seed Methods for Quantification of Feature Confinement

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Bouwers, Eric; Jørgensen, Bo Nørregaard

    2012-01-01

    The way features are implemented in source code has a significant influence on multiple quality aspects of a software system. Hence, it is important to regularly evaluate the quality of feature confinement. Unfortunately, existing approaches to such measurement rely on expert judgement for tracin...

  12. Nonlinear Heart Rate Variability features for real-life stress detection. Case study: students under stress due to university examination.

    Science.gov (United States)

    Melillo, Paolo; Bracale, Marcello; Pecchia, Leandro

    2011-11-07

    This study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection. 42 students volunteered to participate in the study of HRV and stress. For each student, two recordings were performed: one during an ongoing university examination, assumed to be a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed using the Poincaré Plot, Approximate Entropy, Correlation Dimension, Detrended Fluctuation Analysis, and Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test, and for development of a classifier we adopted Linear Discriminant Analysis (LDA). Almost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, sensitivity and specificity of 90%, 86%, and 95%, respectively. The results of the current study suggest that nonlinear HRV analysis using short-term ECG recordings could be effective in automatically detecting a real-life stress condition, such as a university examination.
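
    A short sketch of two of the three features used by the LDA classifier above: the Poincaré plot descriptors SD1 and SD2 computed from an RR-interval series (Approximate Entropy is omitted). The RR series for the "rest" and "exam" conditions are synthetic placeholders.

```python
# Poincare-plot descriptors SD1/SD2 from an RR-interval series.
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 of the Poincare plot of successive RR intervals (same units as rr)."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.sqrt(np.var(y - x) / 2.0)    # spread perpendicular to the identity line
    sd2 = np.sqrt(np.var(y + x) / 2.0)    # spread along the identity line
    return sd1, sd2

rng = np.random.default_rng(10)
rr_rest = 800 + 50 * rng.standard_normal(300)     # placeholder RR intervals [ms], relaxed
rr_exam = 700 + 20 * rng.standard_normal(300)     # placeholder RR intervals [ms], stressed
for name, rr in [("rest", rr_rest), ("exam", rr_exam)]:
    sd1, sd2 = poincare_sd(rr)
    print("%s: SD1 = %.1f ms, SD2 = %.1f ms" % (name, sd1, sd2))
```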

  13. Focused fluid-flow processes through high-quality bathymetric, 2D seismic and Chirp data from the southern parts of the Bay of Biscay, France

    Science.gov (United States)

    Baudon, Catherine; Gillet, Hervé; Cremer, Michel

    2013-04-01

    High-quality bathymetric, 2D seismic and Chirp data located in the southern parts of the Bay of Biscay, France, collected by the University of Bordeaux 1 (Cruises ITSAS 2, 2001; PROSECAN 3, 2006 and SARGASS, 2010) have recently been compiled. The survey area widely covers the Capbreton Canyon, which lies on the boundary between two major structural zones: the Aquitanian passive margin to the North, and the Basque-Cantabrian margin to the South which corresponds to the offshore Pyrenean front. The dataset revealed a large number of key seafloor features potentially associated with focused fluid-flow processes and subsurface sediment-remobilization. Focused fluid migration through sub-seabed sediments is a common phenomenon on continental margins worldwide and has widespread implications from both industrial and fundamental perspectives, from seafloor marine environmental issues to petroleum exploration and hazard assessments. Our study analyses the relationships between seafloor features, deeper structures and fluid migration through the Plio-Quaternary sedimentary pile. The geometrical characteristics, mechanisms of formation and kinematics of four main groups of seabed features have been investigated. (i) A 150km2 field of pockmarks can be observed on the Basque margin. These features are cone-shaped circular or elliptical depressions that are either randomly distributed as small pockmarks (diameter < 20m) or aligned in trains of large pockmarks (ranging from 200 to 600m in diameter) along shallow troughs leading downstream to the Capbreton Canyon. Seismic data show that most pockmarks reach the seabed through vertically staked V-shaped features but some are buried and show evidence of lateral migration through time. (ii) A second field of widely-spaced groups of pockmarks pierce the upper slope of the Aquitanian margin. These depressions are typically a few hundred meters in diameter and seem to be preferentially located in the troughs or on the stoss sides of

  14. Volcanic features of Io

    International Nuclear Information System (INIS)

    Carr, M.H.; Masursky, H.; Strom, R.G.; Terrile, R.J.

    1979-01-01

    The volcanic features of Io as detected during the Voyager mission are discussed. The volcanic activity is apparently higher than on any other body in the Solar System. Its volcanic landforms are compared with features on Earth to indicate the type of volcanism present on Io. (U.K.)

  15. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    Science.gov (United States)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, cause considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt by the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier and trained on the dimensionality-reduced features to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method achieves recognition performance superior to state-of-the-art methods.

  16. A simple optimization can improve the performance of single feature polymorphism detection by Affymetrix expression arrays

    Directory of Open Access Journals (Sweden)

    Fujisawa Hironori

    2010-05-01

    Full Text Available Abstract. Background: High-density oligonucleotide arrays are effective tools for genotyping numerous loci simultaneously. In small genome species (genome size: ... Results: We compared the single feature polymorphism (SFP) detection performance of whole-genome and transcript hybridizations using the Affymetrix GeneChip® Rice Genome Array, using the rice cultivars with full genome sequence, japonica cultivar Nipponbare and indica cultivar 93-11. Both genomes were surveyed for all probe target sequences. Only completely matched 25-mer single-copy probes of the Nipponbare genome were extracted, and SFPs between them and 93-11 sequences were predicted. We investigated optimum conditions for SFP detection in both whole-genome and transcript hybridization using differences between perfect-match and mismatch probe intensities of non-polymorphic targets, assuming that these differences are representative of those between mismatch and perfect targets. Several statistical methods of SFP detection by whole-genome hybridization were compared under the optimized conditions. Causes of false positives and negatives in SFP detection in both types of hybridization were investigated. Conclusions: The optimizations allowed a more than 20% increase in true SFP detection in whole-genome hybridization and a large improvement of SFP detection performance in transcript hybridization. Significance analysis of the microarray for log-transformed raw intensities of PM probes gave the best performance in whole-genome hybridization, and 22,936 true SFPs were detected with 23.58% false positives by whole-genome hybridization. For transcript hybridization, stable SFP detection was achieved for highly expressed genes, and about 3,500 SFPs were detected at a high sensitivity (>50%) in both shoot and young panicle transcripts. The high SFP detection performance of both genome and transcript hybridizations indicated that microarrays of a complex genome (e.g., of Oryza sativa) can be

  17. Structural and stratigraphic constraints on tsunamigenic rupture along the frontal Sunda megathrust from MegaTera bathymetric and seismic reflection data

    Science.gov (United States)

    Bradley, K. E.; Qin, Y.; Villanueva-Robles, F.; Hananto, N.; Leclerc, F.; Singh, S. C.; Tapponnier, P.; Sieh, K.; Wei, S.; Carton, H. D.; Permana, H.; Avianto, P.; Nugroho, A. B.

    2017-12-01

    The joint EOS/IPG/LIPI 2015 MegaTera expedition collected high-resolution seismic reflection profiles and bathymetric data across the Sunda trench, updip of the Mw7.7, 2010 Mentawai tsunami-earthquake rupture patch. These data reveal rapid lateral variations in both the stratigraphic level of the frontal Sunda megathrust and the vergence of frontal ramp faults. The stratigraphic depth of the megathrust at the deformation front correlates with ramp-thrust vergence and with changes in the basal friction angle inferred by critical-taper wedge theory. Where ramp thrusts verge uniformly seaward and have an average dip of 30°, the megathrust decollement resides atop a high-amplitude reflector that marks the inferred top of pelagic sediments. Where ramp thrusts are bi-vergent (similar throw on both landward- and seaward-vergent faults) and have an average dip of 42°, the decollement is higher, within the incoming clastic sequence, above a seismically transparent unit inferred to represent distal fan muds. Where ramp thrusts are uniformly landward vergent, the decollement sits directly on top of the oceanic crust that forms the bathymetrically prominent, subducting Investigator Ridge. The two, separate regions of large tsunamigenic ground-surface uplift during the 2010 tsunami earthquake that have been inferred from joint inversions of seismic, GPS, and tsunami data (e.g. Yue et al., 2014; Satake et al., 2013) correspond to the areas of frontal bi-vergence in the MegaTera data. We propose that enhanced surface uplift and tsunamigenesis during this event occurred when rupture propagated onto areas where the decollement sits directly above the basal muds of the incoming clastic sequence. Thus we hypothesize that frontal bi-vergence may mark areas of enhanced tsunami hazard posed by small magnitude, shallow megathrust ruptures that propagate to the trench. [Yue, H. et al., 2014, Rupture process of the…, JGR 119 doi:10.1002/2014JB011082; Satake, K. et al., 2013, Tsunami

  18. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
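
    As a concrete illustration of the template-based decoding described above, the sketch below scores a test spike-label sequence against word templates by the length of the longest common subsequence (LCS). The discrete label alphabet, the toy templates and the normalisation by the longer sequence length are illustrative assumptions, not the paper's exact scheme.

        # Hedged sketch of LCS-based template matching for spike-label sequences.
        def lcs_length(a, b):
            """Classic dynamic-programming LCS length between two label sequences."""
            dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i, x in enumerate(a, 1):
                for j, y in enumerate(b, 1):
                    dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
            return dp[len(a)][len(b)]

        def recognize(sequence, templates):
            """Pick the word whose template sequence shares the longest subsequence."""
            def score(t):
                return lcs_length(sequence, t) / max(len(sequence), len(t))  # normalise
            return max(templates, key=lambda word: max(score(t) for t in templates[word]))

        # toy usage: templates map each word to one or more label sequences
        templates = {"one": [[3, 7, 7, 2, 9]], "two": [[5, 1, 4, 4, 8]]}
        print(recognize([3, 7, 2, 9, 9], templates))              # -> "one"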

  19. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  20. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection

    Directory of Open Access Journals (Sweden)

    Zhongwen Hu

    2016-02-01

    Full Text Available The accurate extraction and mapping of built-up areas play an important role in many social, economic, and environmental studies. In this paper, we propose a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework. First, an image is divided into small blocks, in which the spectral, textural, and structural features are extracted and represented using a multi-scale framework; a set of refined Harris corner points is then used to select blocks as training samples; finally, a built-up index image is obtained by minimizing the normalized spectral, textural, and structural distances to the training samples, and a built-up area map is obtained by thresholding the index image. Experiments confirm that the proposed approach is effective for high-resolution optical and synthetic aperture radar images, with different scenes and different spatial resolutions.
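
    A rough sketch of the block-based index idea described above is given below. It assumes a single-band image, uses only two toy per-block features (mean and standard deviation, standing in for the full spectral, textural and structural set), and forms an index from normalised distances to assumed training blocks; block size and threshold are illustrative.

        # Hedged sketch: per-block features compared with training-block features,
        # yielding a built-up index that can be thresholded into a map.
        import numpy as np

        def block_features(image, block=32):
            """Very coarse spectral (mean) + textural (std) features per block."""
            h, w = image.shape[:2]
            feats, coords = [], []
            for r in range(0, h - block + 1, block):
                for c in range(0, w - block + 1, block):
                    patch = image[r:r + block, c:c + block]
                    feats.append([patch.mean(), patch.std()])
                    coords.append((r, c))
            return np.array(feats), coords

        def builtup_index(feats, train_feats):
            """Index = 1 - normalised distance to the closest training (built-up) block."""
            scale = feats.std(axis=0) + 1e-9
            d = np.min(np.linalg.norm((feats[:, None, :] - train_feats[None, :, :]) / scale,
                                      axis=2), axis=1)
            return 1.0 - d / d.max()

        # usage: train_idx = indices of blocks selected as built-up samples (assumed)
        # mask = builtup_index(feats, feats[train_idx]) > 0.7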

  1. Mapping river bathymetry with a small footprint green LiDAR: Applications and challenges

    Science.gov (United States)

    Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.

    2013-01-01

    Airborne bathymetric Light Detection And Ranging (LiDAR) systems designed for coastal and marine surveys are increasingly sought after for high-resolution mapping of fluvial systems. To evaluate the potential utility of bathymetric LiDAR for applications of this kind, we compared detailed surveys collected using wading and sonar techniques with measurements from the United States Geological Survey’s hybrid topographic/bathymetric Experimental Advanced Airborne Research LiDAR (EAARL). These comparisons, based upon data collected from the Trinity and Klamath Rivers, California, and the Colorado River, Colorado, demonstrated

  2. Numerical Analysis for Relevant Features in Intrusion Detection (NARFid)

    Science.gov (United States)

    2009-03-01

    Error and Average Correlation Coefficient. Mucciardi and Gose [63] discuss seven methods for selecting features. These methods seek to overcome the … (POEmax − POEmin) (2.37). With each iteration of selecting the next feature, ACC is also normalized in the same fashion. As stated by Mucciardi and Gose … …lan's discussion [70] as described in Section 2.3.1. Mucciardi and Gose [63] provide the POEACC parameters that perform well in their experiments. As

  3. EEG machine learning with Higuchi fractal dimension and Sample Entropy as features for successful detection of depression

    OpenAIRE

    Cukic, Milena; Pokrajac, David; Stokic, Miodrag; Simic, Slobodan; Radivojevic, Vlada; Ljubisavljevic, Milos

    2018-01-01

    Reliable diagnosis of depressive disorder is essential for both optimal treatment and prevention of fatal outcomes. In this study, we aimed to elucidate the effectiveness of two non-linear measures, Higuchi Fractal Dimension (HFD) and Sample Entropy (SampEn), in detecting depressive disorders when applied on EEG. HFD and SampEn of EEG signals were used as features for seven machine learning algorithms including Multilayer Perceptron, Logistic Regression, Support Vector Machines with the linea...
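
    The record above is truncated, but one of its two features, the Higuchi fractal dimension, is a standard computation. The sketch below estimates it for a 1-D EEG epoch; the choice of kmax is a tuning assumption, and the sample-entropy feature and classifiers are not shown.

        # Hedged sketch: Higuchi fractal dimension (HFD) of a 1-D signal.
        import numpy as np

        def higuchi_fd(x, kmax=10):
            """Estimate the Higuchi fractal dimension of a 1-D signal."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            lks = []
            for k in range(1, kmax + 1):
                lm = []
                for m in range(k):
                    idx = np.arange(m, n, k)                       # sub-sampled curve
                    if len(idx) < 2:
                        continue
                    length = np.sum(np.abs(np.diff(x[idx])))       # curve length
                    norm = (n - 1) / (len(idx) - 1) / k            # Higuchi normalisation
                    lm.append(length * norm / k)
                lks.append(np.mean(lm))
            k_vals = np.arange(1, kmax + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lks), 1)
            return slope                                           # ~1 for smooth, ~2 for noisy signals

        # e.g. higuchi_fd(eeg_epoch), together with sample entropy, would feed a classifier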

  4. Nonlinear features identified by Volterra series for damage detection in a buckled beam

    Directory of Open Access Journals (Sweden)

    Shiki S. B.

    2014-01-01

    Full Text Available The present paper proposes a new index for damage detection based on nonlinear features extracted from prediction errors computed by multiple convolutions using the discrete-time Volterra series. A reference Volterra model is identified with data from the healthy condition and used for monitoring the system operating with linear or nonlinear behavior. When the system undergoes some structural change, possibly associated with damage, the computed index can raise an alert, separate the linear and nonlinear contributions, and provide a diagnostic of the structural state. To show the applicability of the method, an experimental test is performed using nonlinear vibration signals measured in a clamped buckled beam subjected to different levels of applied force and to simulated damage introduced through discontinuities inserted in the beam surface.
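
    The Volterra kernels themselves are beyond a short sketch, but the prediction-error index idea reads roughly as below. Here reference_model is an assumed callable identified from healthy-condition data, and a normalised residual RMS stands in for the paper's nonlinear-feature index.

        # Hedged sketch of a prediction-error damage index in the spirit described above.
        import numpy as np

        def damage_index(reference_model, u, y_measured):
            """Normalised RMS of the residual between measured and predicted response."""
            residual = y_measured - reference_model(u)             # reference_model: assumed callable
            return np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(y_measured**2))

        # index close to 0 -> behaviour consistent with the healthy reference model;
        # a growing index flags a structural change, possibly damage.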

  5. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  6. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  7. EVALUATION OF A NOVEL UAV-BORNE TOPO-BATHYMETRIC LASER PROFILER

    Directory of Open Access Journals (Sweden)

    G. Mandlburger

    2016-06-01

    Full Text Available We present a novel topo-bathymetric laser profiler. The sensor system (RIEGL BathyCopter) comprises a laser range finder, an Inertial Measurement Unit (IMU), a Global Navigation Satellite System (GNSS) receiver, a control unit, and digital cameras mounted on an octocopter UAV (RiCOPTER). The range finder operates on the time-of-flight measurement principle and utilizes very short laser pulses (<1 ns) in the green domain of the spectrum (λ=532 nm) for measuring distances to both the water surface and the river bottom. For assessing the precision and accuracy of the system an experiment was carried out in October 2015 at a pre-alpine river (Pielach) in Lower Austria. A 200 m longitudinal section and 12 river cross sections were measured with the BathyCopter sensor system at a flight altitude of 15-20 m above ground level and a measurement rate of 4 kHz. The 3D laser profiler points were compared with independent, quasi-simultaneous data acquisitions using (i) the RIEGL VUX1-UAV lightweight topographic laser scanning system (bare earth, water surface) and (ii) terrestrial survey (river bed). Over bare earth the laser profiler heights have a std. dev. of 3 cm, the water surface height appears to be underestimated by 5 cm, and river bottom heights differ from the reference measurements by 10 cm with a std. dev. of 13 cm. When restricting the comparison to laser profiler bottom points and reference measurements with a lateral offset below 1 m, the values improve to a 4 cm bias with a std. dev. of 6 cm. We additionally report on challenges in comparing UAV-borne to terrestrial profiles. Based on the accuracy and the small footprint (3.5 cm at the water surface) we concluded that the acquired 3D points can potentially serve as input data (river bed geometry, grain roughness) and validation data (water surface, water depth) for hydrodynamic-numerical models.

  8. Retinal microaneurysms detection using local convergence index features

    NARCIS (Netherlands)

    Dashtbozorg, B.; Zhang, J.; Huang, F.; ter Haar Romeny, B.M.

    2018-01-01

    Retinal microaneurysms (MAs) are the earliest clinical sign of diabetic retinopathy disease. Detection of microaneurysms is crucial for the early diagnosis of diabetic retinopathy and prevention of blindness. In this paper, a novel and reliable method for automatic detection of microaneurysms in

  9. Retinal microaneurysms detection using local convergence index features

    NARCIS (Netherlands)

    Dasht Bozorg, B.; Zhang, J.; ter Haar Romeny, B.M.

    2017-01-01

    Retinal microaneurysms are the earliest clinical sign of diabetic retinopathy disease. Detection of microaneurysms is crucial for the early diagnosis of diabetic retinopathy and prevention of blindness. In this paper, a novel and reliable method for automatic detection of microaneurysms in retinal

  10. The diagnostic performance of radiography for detection of osteoarthritis-associated features compared with MRI in hip joints with chronic pain

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Li [Boston University School of Medicine, Quantitative Imaging Center, Department of Radiology, Boston, MA (United States); Beijing Jishuitan Hospital, Department of Radiology, Beijing (China); Hayashi, Daichi; Guermazi, Ali [Boston University School of Medicine, Quantitative Imaging Center, Department of Radiology, Boston, MA (United States); Hunter, David J. [University of Sydney, Department of Medicine, Sydney (Australia); Li, Ling [New England Baptist Hospital, Division of Research, Boston, MA (United States); Winterstein, Anton; Bohndorf, Klaus [Klinikum Augsburg, Department of Radiology, Augsburg (Germany); Roemer, Frank W. [Boston University School of Medicine, Quantitative Imaging Center, Department of Radiology, Boston, MA (United States); Klinikum Augsburg, Department of Radiology, Augsburg (Germany); University of Erlangen, Department of Radiology, Erlangen (Germany)

    2013-10-15

    To evaluate the diagnostic performance of radiography for the detection of MRI-detected osteoarthritis-associated features in various articular subregions of the hip joint. Forty-four patients with chronic hip pain (mean age, 63.3 ± 9.5 years), who were part of the Hip Osteoarthritis MRI Scoring (HOAMS) cohort, underwent both weight-bearing anteroposterior pelvic radiography and 1.5 T MRI. The HOAMS study was a prospective observational study involving 52 subjects, conducted to develop a semiquantitative MRI scoring system for hip osteoarthritis features. In the present study, eight subjects were excluded because of a lack of radiographic assessment. On radiography, the presence of superior and medial joint space narrowing, superior and inferior acetabular/femoral osteophytes, acetabular subchondral cysts, and bone attrition of femoral head was noted. On MRI, cartilage, osteophytes, subchondral cysts, and bone attrition were evaluated in the corresponding locations. Diagnostic performance of radiography was compared with that of MRI, and the area under curve (AUC) was calculated for each pathological feature. Compared with MRI, radiography provided high specificity (0.76-0.90) but variable sensitivity (0.44-0.78) for diffuse cartilage damage (using JSN as an indirect marker), femoral osteophytes, acetabular subchondral cysts and bone attrition of the femoral head, and a low specificity (0.42 and 0.58) for acetabular osteophytes. The AUC of radiography for detecting overall diffuse cartilage damage, marginal osteophytes, subchondral cysts and bone attrition was 0.76, 0.78, 0.67, and 0.82, respectively. Diagnostic performance of radiography is good for bone attrition, fair for marginal osteophytes and cartilage damage, but poor for subchondral cysts. (orig.)

  11. The diagnostic performance of radiography for detection of osteoarthritis-associated features compared with MRI in hip joints with chronic pain

    International Nuclear Information System (INIS)

    Xu, Li; Hayashi, Daichi; Guermazi, Ali; Hunter, David J.; Li, Ling; Winterstein, Anton; Bohndorf, Klaus; Roemer, Frank W.

    2013-01-01

    To evaluate the diagnostic performance of radiography for the detection of MRI-detected osteoarthritis-associated features in various articular subregions of the hip joint. Forty-four patients with chronic hip pain (mean age, 63.3 ± 9.5 years), who were part of the Hip Osteoarthritis MRI Scoring (HOAMS) cohort, underwent both weight-bearing anteroposterior pelvic radiography and 1.5 T MRI. The HOAMS study was a prospective observational study involving 52 subjects, conducted to develop a semiquantitative MRI scoring system for hip osteoarthritis features. In the present study, eight subjects were excluded because of a lack of radiographic assessment. On radiography, the presence of superior and medial joint space narrowing, superior and inferior acetabular/femoral osteophytes, acetabular subchondral cysts, and bone attrition of femoral head was noted. On MRI, cartilage, osteophytes, subchondral cysts, and bone attrition were evaluated in the corresponding locations. Diagnostic performance of radiography was compared with that of MRI, and the area under curve (AUC) was calculated for each pathological feature. Compared with MRI, radiography provided high specificity (0.76-0.90) but variable sensitivity (0.44-0.78) for diffuse cartilage damage (using JSN as an indirect marker), femoral osteophytes, acetabular subchondral cysts and bone attrition of the femoral head, and a low specificity (0.42 and 0.58) for acetabular osteophytes. The AUC of radiography for detecting overall diffuse cartilage damage, marginal osteophytes, subchondral cysts and bone attrition was 0.76, 0.78, 0.67, and 0.82, respectively. Diagnostic performance of radiography is good for bone attrition, fair for marginal osteophytes and cartilage damage, but poor for subchondral cysts. (orig.)

  12. Feature-based RNN target recognition

    Science.gov (United States)

    Bakircioglu, Hakan; Gelenbe, Erol

    1998-09-01

    Detection and recognition of target signatures in sensory data obtained by synthetic aperture radar (SAR), forward- looking infrared, or laser radar, have received considerable attention in the literature. In this paper, we propose a feature based target classification methodology to detect and classify targets in cluttered SAR images, that makes use of selective signature data from sensory data, together with a neural network technique which uses a set of trained networks based on the Random Neural Network (RNN) model (Gelenbe 89, 90, 91, 93) which is trained to act as a matched filter. We propose and investigate radial features of target shapes that are invariant to rotation, translation, and scale, to characterize target and clutter signatures. These features are then used to train a set of learning RNNs which can be used to detect targets within clutter with high accuracy, and to classify the targets or man-made objects from natural clutter. Experimental data from SAR imagery is used to illustrate and validate the proposed method, and to calculate Receiver Operating Characteristics which illustrate the performance of the proposed algorithm.

  13. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features.

    Science.gov (United States)

    Grimm, Lars J; Ghate, Sujata V; Yoon, Sora C; Kuzmiak, Cherie M; Kim, Connie; Mazurowski, Maciej A

    2014-03-01

    The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502-0.739, 95% Confidence Interval: 0.543-0.680, p …). Errors for mammographic masses made by radiology trainees can therefore be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.
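
    A minimal sketch of such a user-specific error model is given below: the trainee's BI-RADS feature vectors are one-hot encoded and a logistic-regression model predicts whether that trainee missed the mass, summarised by a cross-validated AUC. The encoder, classifier choice and feature layout are assumptions; the study's own multivariate model is not specified here.

        # Hedged sketch: per-trainee error model from categorical BI-RADS features.
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import OneHotEncoder

        def trainee_error_auc(birads_features, missed):
            """Cross-validated AUC of predicting this trainee's misses from BI-RADS features."""
            model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                                  LogisticRegression(max_iter=1000))
            scores = cross_val_predict(model, birads_features, missed,
                                       cv=5, method="predict_proba")[:, 1]
            return roc_auc_score(missed, scores)

        # birads_features: e.g. [["oval", "circumscribed", "high", "dense"], ...]  (assumed layout)
        # missed:          1 if the trainee failed to report the mass, else 0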

  14. Modulation of Tidal Channel Signatures on SAR Images Over Gyeonggi Bay in Relation to Environmental Factors

    Directory of Open Access Journals (Sweden)

    Tae-Sung Kim

    2018-04-01

    Full Text Available In this study, variations of radar backscatter features of the tidal channel in Gyeonggi Bay in the Eastern Yellow Sea were investigated using spaceborne synthetic aperture radar (SAR) images. Consistent quasi-linear bright features appeared on the SAR images. Examining the detailed local bathymetry chart, we found that the features were co-located with the major axis of the tidal channel in the region. It was also shown that modulation of the radar backscatter features changed according to the environmental conditions at the time of imaging. For the statistical analysis, the bathymetric features over the tidal channel were extracted by an objective method. In terms of shape, the extracted features had higher variability in width than in length. The analysis of the variation in intensity with the coinciding bathymetric distribution confirmed that the quasi-linear bright features on the SAR images are fundamentally imprinted due to the surface current convergence and divergence caused by the bathymetry-induced tidal current variation. Furthermore, the contribution of environmental factors to the intensity modulation was quantitatively analyzed. A comparison of the variation in normalized radar cross section (NRCS) with tidal current showed a positive correlation only with the perpendicular component of tidal current (r = 0.47). This implies that the modulation in intensity of the tidal channel signatures is mainly affected by the interaction with cross-current flow. On the other hand, the modulation of the NRCS over the tidal channel tended to be degraded as wind speed increased (r = −0.65). Considering the environmental circumstances in the study area, it can be inferred that the imaging capability of SAR for the detection of tidal channel signatures mainly relies on wind speed.

  15. Effects of Per-Pixel Variability on Uncertainties in Bathymetric Retrievals from High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Botha

    2016-05-01

    Full Text Available Increased sophistication of high spatial resolution multispectral satellite sensors provides enhanced bathymetric mapping capability. However, the enhancements are counteracted by per-pixel variability in sunglint, atmospheric path length and directional effects. This case study highlights retrieval errors from images acquired at non-optimal geometrical combinations. The effects of variations in the environmental noise on water surface reflectance and the accuracy of environmental variable retrievals were quantified. Two WorldView-2 satellite images were acquired, within one minute of each other, with Image 1 placed in a near-optimal sun-sensor geometric configuration and Image 2 placed close to the specular point of the Bidirectional Reflectance Distribution Function (BRDF). Image 2 had higher total environmental noise due to increased surface glint and higher atmospheric path-scattering. Generally, depths were under-estimated from Image 2, compared to Image 1. A partial improvement in retrieval error after glint correction of Image 2 resulted in an increase of the maximum depth to which accurate depth estimations were returned. This case study indicates that critical analysis of individual images, accounting for the sun elevation and azimuth, the satellite sensor pointing and geometry, as well as the anticipated wave height and direction, is required to ensure an image is fit for purpose for aquatic data analysis.

  16. Microarray-based large scale detection of single feature ...

    Indian Academy of Sciences (India)

    2015-12-08

    Dec 8, 2015 ... mental stages was used to identify single feature polymorphisms (SFPs) ... on a high-density oligonucleotide expression array ... The sign (+/−) with SFPs indicates direction of polymorphism. In the (−) sign (i.e. ...

  17. Quantitative assessment of the influence of anatomic noise on the detection of subtle lung nodule in digital chest radiography using fractal-feature distance

    International Nuclear Information System (INIS)

    Imai, Kuniharu; Ikeda, Mitsuru; Enchi, Yukihiro; Niimi, Takanaga

    2008-01-01

    Purpose: To confirm whether or not the influence of anatomic noise on the detection of nodules in digital chest radiography can be evaluated by the fractal-feature distance. Materials and methods: We used the square images with and without a simulated nodule which were generated in our previous observer performance study; the simulated nodule was located on the upper margin of a rib, the inside of a rib, the lower margin of a rib, or the central region between two adjoining ribs. For the square chest images, fractal analysis was conducted using the virtual volume method. The fractal-feature distances between the considered and the reference images were calculated using the pseudo-fractal dimension and complexity, and the square images without the simulated nodule were employed as the reference images. We compared the fractal-feature distances with the observer's confidence level regarding the presence of a nodule in plain chest radiograph. Results: For all square chest images, the relationships between the length of the square boxes and the mean of the virtual volumes were linear on a log-log scale. For all types of the simulated nodules, the fractal-feature distance was the highest for the simulated nodules located on the central region between two adjoining ribs and was the lowest for those located in the inside of a rib. The fractal-feature distance showed a linear relation to an observer's confidence level. Conclusion: The fractal-feature distance would be useful for evaluating the influence of anatomic noise on the detection of nodules in digital chest radiography

  18. Adversarial Feature Selection Against Evasion Attacks.

    Science.gov (United States)

    Zhang, Fei; Chan, Patrick P K; Biggio, Battista; Yeung, Daniel S; Roli, Fabio

    2016-03-01

    Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has been mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security to evasion may be even worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks, by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.

  19. EOG feature relevance determination for microsleep detection

    OpenAIRE

    Golz Martin; Wollner Sebastian; Sommer David; Schnieder Sebastian

    2017-01-01

    Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. Ten-second segments immediately before MSE, and also before counterexamples of fatigued but attentive driving, were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as ARD...

  20. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. Brain activity is typically monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted features using several strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using machine learning classifiers while considering multiple factors. The support vector machine kernels were evaluated based on the multiclass kernel and the box constraint level. Likewise, for K-nearest neighbors (KNN) we varied the distance metric, the distance weights, and the number of neighbors. Similarly, for decision trees we tuned the parameters based on the maximum number of splits and the split criterion, and ensemble classifiers were evaluated with different ensemble methods and learning rates. Tenfold cross-validation was employed for training and testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weights gave higher performance for different numbers of neighbors. Moreover, in distinguishing the postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
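
    The sketch below illustrates the kind of parameter tuning described above: tenfold cross-validation over small hyperparameter grids for an SVM and a k-NN classifier operating on pre-extracted EEG features. The grids, scoring choice and scikit-learn usage are illustrative assumptions rather than the study's exact settings (for instance, the study's inverse-squared distance weighting is approximated here by the built-in inverse-distance option).

        # Hedged sketch: tenfold CV hyperparameter search for SVM and k-NN on EEG features.
        from sklearn.model_selection import GridSearchCV, StratifiedKFold
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

        svm_search = GridSearchCV(
            make_pipeline(StandardScaler(), SVC(kernel="linear")),
            {"svc__C": [0.1, 1, 10]}, cv=cv, scoring="roc_auc")

        knn_search = GridSearchCV(
            KNeighborsClassifier(),
            {"n_neighbors": [3, 5, 11],
             "metric": ["manhattan", "euclidean"],   # manhattan == city block distance
             "weights": ["uniform", "distance"]},    # "distance" = inverse-distance weighting
            cv=cv, scoring="roc_auc")

        # svm_search.fit(X, y); knn_search.fit(X, y)  # X: extracted EEG features, y: seizure labels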

  1. Ocean Classification of Dynamical Structures Detected by SAR and Spectral Methods

    Science.gov (United States)

    Redondo, J. M.; Martinez-Benjamin, J. J.; Tellez, J. D.; Jorge, J.; Diez, M.; Sekula, E.

    2016-08-01

    We discuss a taxonomy of different dynamical features in the ocean surface and provide some eddy and front statistics, as well as describing some events detected by several satellites, and in some cases with additional cruise observations and measurements, in the North-west Mediterranean Sea area between 1996 and 2012. The structure of the flows is presented using self-similar traces that may be used to parametrize mixing at both limits of the Rossby Deformation Radius scale, RL. Results show the ability to identify different SAR signatures and at the same time provide calibrations for the different local configurations of vortices, spirals, Langmuir cells, oil spills and tensioactive slicks that eventually allow the study of the self-similar structure of the turbulence. Depending on the surface wind and wave level, and also on the fetch and the bathymetry, the spiral parameters and the resolution of vortical features change. Previous descriptions did not include the new wind and buoyancy features. SAR images also show the turbulence structure of the coastal area and the Regions of Fresh Water Influence (ROFI). It is noteworthy that such complex coastal field-dependent behavior is strongly influenced by stratification and rotation; the turbulence spectrum is observed only in the range smaller than the local Rossby deformation radius, RL. The measures of diffusivity from buoy or tracer experiments are used to calibrate the behavior of different tracers and pollutants, both natural and man-made, in the NW Mediterranean Sea. Thanks to different polarization and intensity levels in ASAR satellite imagery, these can be used to distinguish between natural and man-made sea surface features due to their distinct self-similar and fractal characteristics as a function of spill and slick parameters, environmental conditions and the history of both oil releases and weather conditions. Eddy diffusivity map derived from SAR measurements of the ocean surface, performing a feature spatial correlation of the

  2. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    International Nuclear Information System (INIS)

    García, A; Romano, H; Laciar, E; Correa, R

    2011-01-01

    In this work a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of each beat were extracted. A vector was built with these features; this vector is the input of the classification module, which is based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beats (PVC), Atrial Premature Contraction beats (APC) and Normal Beats (NB). These beat categories represent the most important groups for commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166,343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
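
    A very reduced sketch of the first stage is shown below: a generic energy-based QRS detector from which RR intervals (temporal features) can be derived. The smoothing window, thresholds and 250 Hz sampling rate are assumptions; the paper's actual detector and discriminant-analysis classifier are not reproduced.

        # Hedged sketch: simple energy-based QRS detection for an ECG channel.
        import numpy as np
        from scipy.signal import find_peaks

        def detect_qrs(ecg, fs=250):
            """Return sample indices of candidate R peaks."""
            energy = np.gradient(ecg) ** 2                        # emphasise steep QRS slopes
            window = int(0.12 * fs)                               # ~QRS width
            smoothed = np.convolve(energy, np.ones(window) / window, mode="same")
            peaks, _ = find_peaks(smoothed,
                                  height=0.3 * smoothed.max(),    # crude global threshold
                                  distance=int(0.25 * fs))        # refractory period
            return peaks

        # Temporal features such as RR intervals can then feed the classification stage:
        # rr = np.diff(detect_qrs(ecg)) / fs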

  3. Predicting the aquatic stage sustainability of a restored backwater channel combining in-situ and airborne remotely sensed bathymetric models.

    Science.gov (United States)

    Jérôme, Lejot; Jérémie, Riquier; Hervé, Piégay

    2014-05-01

    As with other large river floodplains worldwide, the floodplain of the Rhône has been deeply altered by human activities and infrastructures over the last centuries, both in terms of structure and functioning. An ambitious restoration plan of selected by-passed reaches has been implemented since 1999 in order to improve their ecological conditions. One of the main actions aimed to increase the aquatic areas in floodplain channels (i.e., secondary channels, backwaters, ...). In practice, fine and/or coarse alluvium was dredged, either locally or over the entire cut-off channel length. Sometimes the upstream or downstream alluvial plugs were also removed to reconnect the restored feature to the main channel. Such operations aim to restore forms and associated habitats of biotic communities, which are no longer created or maintained by the river itself. In this context, assessing the sustainability of such restoration actions is a major issue. In this study, we focus on 1 of the 24 floodplain channels that have been restored along the Rhône River since 1999, the Malourdie channel (Chautagne reach, France). The geomorphological evolution of the channel has been monitored for a decade to assess the aquatic stage sustainability of this formerly fully isolated channel, which was restored as a backwater in 2004. Two main types of measurements were performed: (a) water depth and fine sediment thickness were surveyed with an auger every 10 m along the channel centerline, on average every year and a half, allowing us to establish an exponential decay model of terrestrialization rates through time; (b) three airborne campaigns (2006, 2007, 2012) by Unmanned Aerial Vehicle (UAV) provided images from which bathymetry was inferred in combination with observed field measurements. Coupling the field and airborne models allows us to simulate different states of terrestrialization at the scale of the whole restored feature (e.g., 2020/2030/2050). Raw results indicate that terrestrialization

  4. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools which can be applied to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high resolution spectra derived from the analysis of serum, and we use support vector machines for classification. In particular we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy, averaged over a large cross-validation study, is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results for this problem on the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
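
    As an illustration of the unsupervised, scale-space flavour of the feature extraction described above, the sketch below keeps only spectral peaks that persist across a set of Gaussian smoothing scales; the retained intensities could then feed a support vector machine. The scale list and the persistence rule are simplifying assumptions, not the authors' algorithm.

        # Hedged sketch: peaks persisting across a Gaussian scale space of a spectrum.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.signal import find_peaks

        def scale_space_peaks(spectrum, scales=(2, 4, 8, 16)):
            """Return indices of peaks present at every smoothing scale."""
            surviving = None
            for s in scales:
                smoothed = gaussian_filter1d(np.asarray(spectrum, float), sigma=s)
                peaks, _ = find_peaks(smoothed)
                nearby = set()
                for p in peaks:
                    nearby.update(range(p - s, p + s + 1))        # tolerance grows with scale
                surviving = nearby if surviving is None else surviving & nearby
            return sorted(surviving)

        # The spectrum intensities at these indices would then be the SVM input features.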

  5. Second feature of the matter two-point function

    Science.gov (United States)

    Tansella, Vittorio

    2018-05-01

    We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. A standard χ2 analysis to estimate the detection significance of the second feature is mimicked. We conclude that, for realistic values of the baryon density, a SKA-like galaxy survey will not be able to detect this feature with standard correlation function analysis.

  6. Limits in feature-based attention to multiple colors.

    Science.gov (United States)

    Liu, Taosheng; Jigo, Michael

    2017-11-01

    Attention to a feature enhances the sensory representation of that feature. Although much has been learned about the properties of attentional modulation when attending to a single feature, the effectiveness of attending to multiple features is not well understood. We investigated this question in a series of experiments using a color-detection task while varying the number of attended colors in a cueing paradigm. Observers were shown either a single cue, two cues, or no cue (baseline) before detecting a coherent color target. We measured detection threshold by varying the coherence level of the target. Compared to the baseline condition, we found consistent facilitation of detection performance in the one-cue and two-cue conditions, but performance in the two-cue condition was lower than that in the one-cue condition. In the final experiment, we presented a 50% valid cue to emulate the situation in which observers were only able to attend a single color in the two-cue condition, and found equivalent detection thresholds with the standard two-cue condition. These results indicate a limit in attending to two colors and further imply that observers could effectively attend a single color at a time. Such a limit is likely due to an inability to maintain multiple active attentional templates for colors.

  7. Detection of hypertensive retinopathy using vessel measurements and textural features.

    Science.gov (United States)

    Agurto, Carla; Joshi, Vinayak; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

    Features that indicate hypertensive retinopathy have been well described in the medical literature. This paper presents a new system to automatically classify subjects with hypertensive retinopathy (HR) using digital color fundus images. Our method consists of the following steps: 1) normalization and enhancement of the image; 2) determination of regions of interest based on automatic location of the optic disc; 3) segmentation of the retinal vasculature and measurement of vessel width and tortuosity; 4) extraction of color features; 5) classification of vessel segments as arteries or veins; 6) calculation of artery-vein ratios using the six widest (major) vessels for each category; 7) calculation of mean red intensity and saturation values for all arteries; 8) calculation of amplitude-modulation frequency-modulation (AM-FM) features for the entire image; and 9) classification of features into HR and non-HR using linear regression. This approach was tested on 74 digital color fundus photographs taken with TOPCON and CANON retinal cameras using leave-one-out cross-validation. An area under the ROC curve (AUC) of 0.84 was achieved with sensitivity and specificity of 90% and 67%, respectively.
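
    Two of the vessel measurements listed above are simple enough to sketch directly: segment tortuosity (arc length over chord length, step 3) and the artery-vein ratio computed from the six widest vessels of each class (step 6). The sketch assumes pre-computed vessel centrelines and widths; the segmentation, color and AM-FM stages are not shown.

        # Hedged sketch: tortuosity and artery-vein ratio from assumed vessel measurements.
        import numpy as np

        def tortuosity(centerline_xy):
            """Arc length divided by chord length of a vessel segment centreline."""
            pts = np.asarray(centerline_xy, float)
            arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
            chord = np.linalg.norm(pts[-1] - pts[0])
            return arc / chord

        def artery_vein_ratio(artery_widths, vein_widths, k=6):
            """Mean width of the k widest arteries over the k widest veins."""
            widest = lambda w: np.mean(sorted(w, reverse=True)[:k])
            return widest(artery_widths) / widest(vein_widths)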

  8. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents.

    Science.gov (United States)

    Zhang, Jing; Lo, Joseph Y; Kuzmiak, Cherie M; Ghate, Sujata V; Yoon, Sora C; Mazurowski, Maciej A

    2014-09-01

    Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p …

  9. Diagnostic performance of 3D standing CT imaging for detection of knee osteoarthritis features.

    Science.gov (United States)

    Segal, Neil A; Nevitt, Michael C; Lynch, John A; Niu, Jingbo; Torner, James C; Guermazi, Ali

    2015-07-01

    To determine the diagnostic performance of standing computerized tomography (SCT) of the knee for osteophytes and subchondral cysts compared with fixed-flexion radiography, using MRI as the reference standard. Twenty participants were recruited from the Multicenter Osteoarthritis Study. Participants' knees were imaged with SCT while standing in a knee-positioning frame, and with postero-anterior fixed-flexion radiography and 1T MRI. Medial and lateral marginal osteophytes and subchondral cysts were scored on bilateral radiographs and coronal SCT images using the OARSI grading system and on coronal MRI using Whole Organ MRI Scoring. Imaging modalities were read separately with images in random order. Sensitivity, specificity and accuracy for the detection of lesions were calculated and differences between modalities were tested using McNemar's test. Participants' mean age was 66.8 years, body mass index was 29.6 kg/m² and 50% were women. Of the 160 surfaces (medial and lateral femur and tibia for 40 knees), MRI revealed 84 osteophytes and 10 subchondral cysts. In comparison with osteophytes and subchondral cysts detected by MRI, SCT was significantly more sensitive (93 and 100%; p … osteophytes) than plain radiographs (sensitivity 60 and 10% and accuracy 79 and 94%, respectively). For osteophytes, differences in sensitivity and accuracy were greatest at the medial femur (p = 0.002). In comparison with MRI, SCT imaging was more sensitive and accurate for detection of osteophytes and subchondral cysts than conventional fixed-flexion radiography. Additional study is warranted to assess diagnostic performance of SCT measures of joint space width, progression of OA features and the patellofemoral joint.

  10. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI,0.564-0.650). This value was statistically significantly different

  11. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    International Nuclear Information System (INIS)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI,0.564-0.650). This value was statistically significantly different

  12. Detection of Abnormal Events via Optical Flow Feature Analysis

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2015-03-01

    Full Text Available In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of the optical flow orientation descriptor and a classification method. The details of the histogram of the optical flow orientation descriptor are illustrated for describing the movement information of the global video frame or the foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The different abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm.
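
    A reduced sketch of the descriptor and the one-class detector is given below: a magnitude-weighted histogram of optical-flow orientations per frame, scored by a one-class SVM trained on normal frames only. The dense optical-flow field is assumed to be provided by an earlier step, and the bin count, magnitude gate and SVM parameters are illustrative; the kernel-PCA combination used in the paper is omitted.

        # Hedged sketch: histogram of optical-flow orientations plus one-class SVM scoring.
        import numpy as np
        from sklearn.svm import OneClassSVM

        def flow_orientation_histogram(flow, bins=8, min_mag=0.5):
            """Magnitude-weighted histogram of flow orientations for one frame."""
            mag = np.hypot(flow[..., 0], flow[..., 1])
            ang = np.arctan2(flow[..., 1], flow[..., 0])           # orientations in [-pi, pi]
            mask = mag > min_mag                                   # ignore nearly static pixels
            hist, _ = np.histogram(ang[mask], bins=bins, range=(-np.pi, np.pi),
                                   weights=mag[mask])
            return hist / (hist.sum() + 1e-9)

        # train on descriptors of normal frames, then flag frames with negative scores:
        # detector = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_descriptors)
        # is_abnormal = detector.decision_function(test_descriptors) < 0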

  13. Detection of Abnormal Events via Optical Flow Feature Analysis

    Science.gov (United States)

    Wang, Tian; Snoussi, Hichem

    2015-01-01

    In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of the optical flow orientation descriptor and a classification method. The details of the histogram of the optical flow orientation descriptor are illustrated for describing the movement information of the global video frame or the foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The different abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm. PMID:25811227

  14. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2017-12-01

    Full Text Available A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smeared (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles) that represent a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria in terms of classification accuracy based on the choice of best features and in terms of the different categories of the cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared the classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between the QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifested in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
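
    A simplified stand-in for the feature-selection stage described above, with a plain binary PSO and a standard k-NN replacing the quantum-behaved update rule and the fuzzy membership weighting; all constants are illustrative:

```python
# Binary PSO selects a feature subset whose fitness is the cross-validated
# accuracy of a k-NN classifier (plain counterparts of QPSO and Fuzzy k-NN).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso_select(X, y, n_particles=20, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, n_feat))
    vel = rng.normal(0, 1, size=(n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of the velocity gives the probability of switching a bit on
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(int)
        fits = np.array([fitness(p, X, y) for p in pos])
        improved = fits > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)   # boolean mask of selected features
```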

  15. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    Science.gov (United States)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional

  16. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM. This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure termed cumulative spectral density index and used to estimate the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make the usage of vibration, acoustic, and vibroacoustic measurements in different applications comparable. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology to implement in CM of rotating machinery, even using measurements with low signal-to-noise ratio.
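
    A rough sketch of the band-decomposition idea, assuming Butterworth band-pass filters for the filter bank and ordinary spectral coherence as a stand-in for the paper's cumulative spectral density index; band edges and filter order are placeholders:

```python
# Split the raw signal into narrow bands and score each band by its spectral
# coherence with the raw signal; the resulting vector is the per-record
# feature set fed to a classifier.
import numpy as np
from scipy.signal import butter, sosfiltfilt, coherence

def band_similarity_features(x, fs, edges=(0, 500, 1000, 2000, 4000, 8000)):
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        lo = max(lo, 1.0)                      # avoid a zero low cut
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        f, cxy = coherence(band, x, fs=fs, nperseg=1024)
        feats.append(cxy.mean())               # similarity of band vs. raw signal
    return np.array(feats)

# Example with a synthetic vibration-like signal
fs = 20000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t) \
    + 0.1 * np.random.randn(t.size)
print(band_similarity_features(x, fs))
```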

  17. Iris recognition using possibilistic fuzzy matching on local features.

    Science.gov (United States)

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

  18. Feature Detection, Characterization and Confirmation Methodology: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-03-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations

  19. Feature Detection, Characterization and Confirmation Methodology: Final Report

    International Nuclear Information System (INIS)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-01-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations. Many of the

  20. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach
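
    A hedged sketch of the detection stage: simple time-frequency features are computed from a PPG segment's spectrogram and an SVM flags MNA-corrupted segments; the features below are plausible examples, not the exact TifMA feature set:

```python
# Time-frequency features from a PPG spectrogram plus an SVM decision boundary
# learned from a labeled training set.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def tf_features(ppg_segment, fs=100):
    f, t, S = spectrogram(ppg_segment, fs=fs, nperseg=256, noverlap=128)
    band = (f >= 0.5) & (f <= 3.0)                     # plausible heart-rate band
    p = S[band] / (S[band].sum(axis=0, keepdims=True) + 1e-12)
    dom_freq_var = f[band][p.argmax(axis=0)].var()     # drift of the dominant peak
    spec_entropy = -(p * np.log(p + 1e-12)).sum(axis=0).mean()
    band_ratio = S[band].sum() / (S.sum() + 1e-12)
    return np.array([dom_freq_var, spec_entropy, band_ratio])

def train_mna_detector(segments, labels, fs=100):
    """labels: 1 = MNA-corrupted, 0 = clean (from a labeled training set)."""
    X = np.vstack([tf_features(s, fs) for s in segments])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return clf.fit(X, labels)
```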

  1. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer with the highest mortality rate. However, its annihilation in an early stage implies a high survival rate; therefore, it demands early diagnosis. The accustomed diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirements for a highly equipped environment. The recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses the traits such as color, texture, and HOG (shape). The fused features are selected afterwards by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available dataset PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same dataset. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
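
    An illustrative sketch of the serial feature-fusion and classification steps, assuming equally sized RGB lesion patches; a generic univariate selector stands in for the Boltzmann-entropy selection, and all parameters are placeholders:

```python
# Serial fusion of colour, texture (LBP), and HOG descriptors, a simple
# feature-selection stage, and an SVM classifier.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lesion_features(rgb_patch):
    """rgb_patch: equally sized uint8 RGB array for one segmented lesion."""
    gray = rgb_patch.mean(axis=2)
    color_hist = np.concatenate(
        [np.histogram(rgb_patch[..., c], bins=16, range=(0, 256))[0] for c in range(3)])
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist = np.histogram(lbp, bins=10, range=(0, 10))[0]
    shape = hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([color_hist, lbp_hist, shape]).astype(float)  # serial fusion

def train_classifier(patches, labels, k=200):
    X = np.vstack([lesion_features(p) for p in patches])
    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=min(k, X.shape[1])),
                          SVC(kernel="rbf"))
    return model.fit(X, labels)
```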

  2. LSNR Airborne LIDAR Mapping System Design and Early Results (Invited)

    Science.gov (United States)

    Shrestha, K.; Carter, W. E.; Slatton, K. C.

    2009-12-01

    Low signal-to-noise ratio (LSNR) detection techniques allow for implementation of airborne light detection and ranging (LIDAR) instrumentation aboard platforms with prohibitive power, size, and weight restrictions. The University of Florida has developed the Coastal Area Tactical-mapping System (CATS), a prototype LSNR LIDAR system capable of single photon laser ranging. CATS is designed to operate in a fixed-wing aircraft flying 600 m above ground level, producing 532 nm, 480 ps, 3 μJ output pulses at 8 kHz. To achieve continuous coverage of the terrain with 20 cm spatial resolution in a single pass, a 10x10 array of laser beamlets is scanned. A Risley prism scanner (two rotating V-coated optical wedges) allows the array of laser beamlets to be deflected in a variety of patterns, including conical, spiral, and lines at selected angles to the direction of flight. Backscattered laser photons are imaged onto a 100 channel (10x10 segmented-anode) photomultiplier tube (PMT) with a micro-channel plate (MCP) amplifier. Each channel of the PMT is connected to a multi-stop 2 GHz event timer. Here we report on tests in which ranges for known targets were accumulated for repeated laser shots and statistical analyses were applied to evaluate range accuracy, minimum separation distance, bathymetric mapping depth, and atmospheric scattering. Ground-based field test results have yielded 10 cm range accuracy and sub-meter feature identification at variable scan settings. These experiments also show that a secondary surface can be detected at a distance of 15 cm from the first. Range errors in secondary surface identification for six separate trials were within 7.5 cm, or within the timing resolution limit of the system. Operating at multi-photon sensitivity may have value for situations in which high ambient noise precludes single-photon sensitivity. Low reflectivity targets submerged in highly turbid waters can cause detection issues. CATS offers the capability to adjust the

  3. Research on oral test modeling based on multi-feature fusion

    Science.gov (United States)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strength of PCNN in image segmentation and related processing is exploited to process the speech spectrogram and extract features, and a new method combining speech signal processing and image processing is explored. In addition to the spectrogram features, MFCC-based spectral features are extracted and fused with them to further improve the accuracy of spoken-language recognition. Because the resulting input features are richer and more discriminative, we use a Support Vector Machine (SVM) to construct the classifier and then compare the extracted test voice features with standard voice features to detect whether the spoken language meets the standard. Experiments show that the method of extracting features from spectrograms using PCNN is feasible, and the fusion of image features and spectral features can improve detection accuracy.
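
    A minimal sketch of the feature-fusion idea, assuming librosa for the audio processing; the PCNN-based spectrogram processing from the paper is not reproduced, and simple spectrogram statistics stand in for it:

```python
# Fuse MFCC statistics with spectrogram statistics and classify with an SVM.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def speech_features(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # spectral features
    S = np.abs(librosa.stft(y, n_fft=512, hop_length=256))      # spectrogram magnitude
    spec_stats = np.concatenate([S.mean(axis=1), S.std(axis=1)])
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), spec_stats])

def train_oral_test_model(wav_paths, labels):
    X = np.vstack([speech_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return clf.fit(X, labels)
```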

  4. Historical bathymetry and bathymetric change in the Mississippi-Alabama coastal region, 1847-2009

    Science.gov (United States)

    Buster, Noreen A.; Morton, Robert A.

    2011-01-01

    Land loss and seafloor change around the Mississippi and Alabama (MS-AL) barrier islands are of great concern to the public and to local, state, and federal agencies. The islands provide wildlife protected areas and recreational land, and they serve as a natural first line of defense for the mainland against storm activity (index map on poster). Principal physical conditions that drive morphological seafloor and coastal change in this area include decreased sediment supply, sea-level rise, storms, and human activities (Otvos, 1970; Byrnes and others, 1991; Morton and others, 2004; Morton, 2008). Seafloor responses to the same processes can also affect the entire coastal zone. Sediment eroded from the barrier islands is entrained in the littoral system, where it is redistributed by alongshore currents. Wave and current activity is partially controlled by the profile of the seafloor, and this interdependency along with natural and anthropogenic influences has significant effects on nearshore environments. When a coastal system is altered by human activity such as dredging, as is the case of the MS-AL coastal region, the natural state and processes are altered, and alongshore sediment transport can be disrupted. As a result of deeply dredged channels, adjacent island migration is blocked, nearshore environments downdrift in the littoral system become sediment starved, and sedimentation around the channels is modified. Sediment deposition and erosion are reflected through seafloor evolution. In a rapidly changing coastal environment, understanding historically where and why changes are occurring is essential. To better assess the comprehensive dynamics of the MS-AL coastal zone, a 160-year evaluation of the bathymetry and bathymetric change of the region was conducted.

  5. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  6. Improving EEG signal peak detection using feature weight learning ...

    Indian Academy of Sciences (India)

    Asrul Adam

  7. Localized thin-section CT with radiomics feature extraction and machine learning to classify early-detected pulmonary nodules from lung cancer screening

    Science.gov (United States)

    Tu, Shu-Ju; Wang, Chih-Wei; Pan, Kuang-Tse; Wu, Yi-Cheng; Wu, Chen-Te

    2018-03-01

    Lung cancer screening aims to detect small pulmonary nodules and decrease the mortality rate of those affected. However, studies from large-scale clinical trials of lung cancer screening have shown that the false-positive rate is high and positive predictive value is low. To address these problems, a technical approach is greatly needed for accurate malignancy differentiation among these early-detected nodules. We studied the clinical feasibility of an additional protocol of localized thin-section CT for further assessment on recalled patients from lung cancer screening tests. Our approach of localized thin-section CT was integrated with radiomics features extraction and machine learning classification which was supervised by pathological diagnosis. Localized thin-section CT images of 122 nodules were retrospectively reviewed and 374 radiomics features were extracted. In this study, 48 nodules were benign and 74 malignant. There were nine patients with multiple nodules and four with synchronous multiple malignant nodules. Different machine learning classifiers with a stratified ten-fold cross-validation were used and repeated 100 times to evaluate classification accuracy. Of the image features extracted from the thin-section CT images, 238 (64%) were useful in differentiating between benign and malignant nodules. These useful features include CT density (p = 0.002518), sigma (p = 0.002781), uniformity (p = 0.03241), and entropy (p = 0.006685). The highest classification accuracy was 79% by the logistic classifier. The performance metrics of this logistic classification model was 0.80 for the positive predictive value, 0.36 for the false-positive rate, and 0.80 for the area under the receiver operating characteristic curve. Our approach of direct risk classification supervised by the pathological diagnosis with localized thin-section CT and radiomics feature extraction may support clinical physicians in determining
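
    A short sketch of the reported evaluation protocol (a logistic classifier scored with stratified ten-fold cross-validation repeated many times), shown here on synthetic data of the same shape as the study's feature matrix; the actual radiomics features and repeat count are of course those of the paper, not this toy:

```python
# Logistic classifier on a radiomics feature matrix, evaluated with repeated
# stratified ten-fold cross-validation (accuracy and ROC AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_radiomics(X, y, n_repeats=100):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=n_repeats, random_state=0)
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    return acc.mean(), auc.mean()

# Illustrative call with synthetic data shaped like the study (122 x 374)
rng = np.random.default_rng(0)
X = rng.normal(size=(122, 374))
y = np.r_[np.zeros(48, dtype=int), np.ones(74, dtype=int)]
print(evaluate_radiomics(X, y, n_repeats=5))   # few repeats just to keep the demo fast
```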

  8. Hypothesis testing for differentially correlated features.

    Science.gov (United States)

    Sheng, Elisa; Witten, Daniela; Zhou, Xiao-Hua

    2016-10-01

    In a multivariate setting, we consider the task of identifying features whose correlations with the other features differ across conditions. Such correlation shifts may occur independently of mean shifts, or differences in the means of the individual features across conditions. Previous approaches for detecting correlation shifts consider features simultaneously, by computing a correlation-based test statistic for each feature. However, since correlations involve two features, such approaches do not lend themselves to identifying which feature is the culprit. In this article, we instead consider a serial testing approach, by comparing columns of the sample correlation matrix across two conditions, and removing one feature at a time. Our method provides a novel perspective and favorable empirical results compared with competing approaches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
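
    A toy illustration of the column-wise comparison idea: for each feature, the vector of its correlations with all other features is compared across the two conditions and the size of that shift is used to rank features; this illustrates the general approach, not the authors' exact test statistic:

```python
# Rank features by how much their correlation column shifts between conditions.
import numpy as np

def correlation_shift_scores(X1, X2):
    """X1, X2: samples x features matrices from condition 1 and condition 2."""
    R1, R2 = np.corrcoef(X1, rowvar=False), np.corrcoef(X2, rowvar=False)
    np.fill_diagonal(R1, 0.0)
    np.fill_diagonal(R2, 0.0)
    # Fisher z-transform stabilises the variance of the correlations
    Z1 = np.arctanh(np.clip(R1, -0.999, 0.999))
    Z2 = np.arctanh(np.clip(R2, -0.999, 0.999))
    return np.linalg.norm(Z1 - Z2, axis=0)      # one shift score per feature

# Synthetic example: feature 3 changes its correlation structure in condition 2
rng = np.random.default_rng(1)
X1 = rng.normal(size=(50, 10))
X2 = rng.normal(size=(50, 10))
X2[:, 3] = 0.9 * X2[:, 0] + 0.1 * rng.normal(size=50)
print(correlation_shift_scores(X1, X2).argmax())   # expected: 3
```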

  9. Chromatic Information and Feature Detection in Fast Visual Analysis.

    Directory of Open Access Journals (Sweden)

    Maria M Del Viva

    Full Text Available The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  10. Feature-Based Change Detection Reveals Inconsistent Individual Differences in Visual Working Memory Capacity.

    Science.gov (United States)

    Ambrose, Joseph P; Wijeakumar, Sobanawartiny; Buss, Aaron T; Spencer, John P

    2016-01-01

    Visual working memory (VWM) is a key cognitive system that enables people to hold visual information in mind after a stimulus has been removed and compare past and present to detect changes that have occurred. VWM is severely capacity limited to around 3-4 items, although there are robust individual differences in this limit. Importantly, these individual differences are evident in neural measures of VWM capacity. Here, we capitalized on recent work showing that capacity is lower for more complex stimulus dimension. In particular, we asked whether individual differences in capacity remain consistent if capacity is shifted by a more demanding task, and, further, whether the correspondence between behavioral and neural measures holds across a shift in VWM capacity. Participants completed a change detection (CD) task with simple colors and complex shapes in an fMRI experiment. As expected, capacity was significantly lower for the shape dimension. Moreover, there were robust individual differences in behavioral estimates of VWM capacity across dimensions. Similarly, participants with a stronger BOLD response for color also showed a strong neural response for shape within the lateral occipital cortex, intraparietal sulcus (IPS), and superior IPS. Although there were robust individual differences in the behavioral and neural measures, we found little evidence of systematic brain-behavior correlations across feature dimensions. This suggests that behavioral and neural measures of capacity provide different views onto the processes that underlie VWM and CD. Recent theoretical approaches that attempt to bridge between behavioral and neural measures are well positioned to address these findings in future work.

  11. Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection.

    Science.gov (United States)

    Li, Baopu; Meng, Max Q-H

    2012-05-01

    Tumor in digestive tract is a common disease and wireless capsule endoscopy (WCE) is a relatively new technology to examine diseases for digestive tract especially for small intestine. This paper addresses the problem of automatic recognition of tumor for WCE images. Candidate color texture feature that integrates uniform local binary pattern and wavelet is proposed to characterize WCE images. The proposed features are invariant to illumination change and describe multiresolution characteristics of WCE images. Two feature selection approaches based on support vector machine, sequential forward floating selection and recursive feature elimination, are further employed to refine the proposed features for improving the detection accuracy. Extensive experiments validate that the proposed computer-aided diagnosis system achieves a promising tumor recognition accuracy of 92.4% in WCE images on our collected data.
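
    A hedged sketch combining a uniform LBP histogram on wavelet sub-bands with SVM-based recursive feature elimination; sub-band choice, histogram size, and the number of retained features are illustrative rather than the paper's settings:

```python
# Uniform LBP histograms over wavelet sub-bands, refined with SVM-RFE.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def wce_texture_features(gray_image):
    feats = []
    coeffs = pywt.wavedec2(gray_image, "db2", level=1)
    for band in [coeffs[0], *coeffs[1]]:                 # LL, LH, HL, HH sub-bands
        lbp = local_binary_pattern(band, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def train_with_rfe(images, labels, n_keep=20):
    X = np.vstack([wce_texture_features(im) for im in images])
    selector = RFE(SVC(kernel="linear"), n_features_to_select=min(n_keep, X.shape[1]))
    selector.fit(X, labels)
    return selector      # selector.support_ marks the retained features
```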

  12. A keyword spotting model using perceptually significant energy features

    Science.gov (United States)

    Umakanthan, Padmalochini

    The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. General procedure of a keyword spotting system involves feature generation and matching. In this work, new set of features that are based on the psycho-acoustic masking nature of human speech are proposed. After developing these features a time aligned pattern matching process was implemented to locate the words in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using widely acclaimed Cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.

  13. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.

    Science.gov (United States)

    Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua

    2017-01-01

    (Background) The number of Alzheimer's disease patients is increasing rapidly every year, and scholars tend to use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo-Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo-Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers.
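
    A compact sketch of the linear regression classification (LRC) rule: each class is modelled by the span of its training feature vectors, and a query is assigned to the class giving the smallest least-squares reconstruction residual; the pseudo-Zernike feature extraction is assumed to have been done beforehand:

```python
# Linear regression classification: class-specific least-squares projection,
# decision by minimum reconstruction residual.
import numpy as np

class LinearRegressionClassifier:
    def fit(self, X, y):
        # One column-matrix of training feature vectors per class
        self.classes_ = np.unique(y)
        self.bases_ = {c: X[y == c].T for c in self.classes_}   # shape: d x n_c
        return self

    def predict(self, X):
        preds = []
        for x in X:
            residuals = []
            for c in self.classes_:
                B = self.bases_[c]
                beta, *_ = np.linalg.lstsq(B, x, rcond=None)
                residuals.append(np.linalg.norm(x - B @ beta))
            preds.append(self.classes_[int(np.argmin(residuals))])
        return np.array(preds)

# Tiny illustration with synthetic 256-dimensional feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 256)), rng.normal(3, 1, (20, 256))])
y = np.r_[np.zeros(20, dtype=int), np.ones(20, dtype=int)]
clf = LinearRegressionClassifier().fit(X, y)
print((clf.predict(X) == y).mean())
```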

  14. Multithreaded hybrid feature tracking for markerless augmented reality.

    Science.gov (United States)

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
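
    A minimal sketch of the frame-to-frame tracking component (corner detection plus pyramidal Lucas-Kanade optical flow); the multithreading, invariant-feature detection, and hand-based pose estimation described in the abstract are omitted:

```python
# Detect distinctive corners, then track them frame-to-frame with pyramidal LK.
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts=None):
    if prev_pts is None or len(prev_pts) < 50:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                           qualityLevel=0.01, minDistance=7)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]   # matched point pairs for pose estimation

# Typical use inside a capture loop (camera index 0 assumed):
# cap = cv2.VideoCapture(0)
# ok, prev = cap.read(); prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# while True:
#     ok, frame = cap.read()
#     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#     p0, p1 = track_features(prev_gray, gray)
#     prev_gray = gray
```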

  15. Tianma 65-m telescope detection of new OH maser features towards the water fountain source IRAS 18286-0959

    Science.gov (United States)

    Chen, Xi; Shen, Zhi-Qiang; Li, Xiao-Qiong; Yang, Kai; Nakashima, Jun-ichi; Wu, Ya-Jun; Zhao, Rong-Bin; Li, Juan; Wang, Jun-Zhi; Jiang, Dong-Rong; Wang, Jin-Qing; Li, Bin; Zhong, Wei-Ye; Yung, Bosco H. K.

    2017-07-01

    We report the results of the OH maser observation towards the water fountain source IRAS 18286-0959 using the newly built Shanghai Tianma 65-m Radio Telescope. We observed the three OH ground state transition lines at frequencies of 1612, 1665 and 1667 MHz. Comparing with the spectra of previous observations, we find new maser spectral components at velocity channels largely shifted from the systemic velocity: the velocity offsets of the newly found components lie in the range 20-40 km s-1 with respect to the systemic velocity. Besides maser variability, another possible interpretation for the newly detected maser features is that part of the molecular gas in the circumstellar envelope is accelerated. The acceleration is probably caused by the passage of a high-velocity molecular jet, which has been detected in previous Very Long Baseline Interferometry observations in the H2O maser line.

  16. Geomorphic and hydraulic controls on large-scale riverbank failure on a mixed bedrock-alluvial river system, the River Murray, South Australia: a bathymetric analysis.

    Science.gov (United States)

    De Carli, E.; Hubble, T.

    2014-12-01

    During the peak of the Millennium Drought (1997-2010) pool-levels in the lower River Murray in South Australia dropped 1.5 metres below sea level, resulting in large-scale mass failure of the alluvial banks. The largest of these failures occurred without signs of prior instability at Long Island Marina, where a 270 metre length of populated and vegetated riverbank collapsed in a series of rotational failures. Analysis of long-reach bathymetric surveys of the river channel revealed a strong relationship between geomorphic and hydraulic controls on channel width and downstream alluvial failure. As the entrenched channel planform meanders within and encroaches upon its bedrock valley confines, the channel width is 'pinched' and decreases by up to half, resulting in a deepening thalweg and channel bed incision. The authors posit that flow and shear velocities increase at these geomorphically controlled 'pinch-points', resulting in complex and variable hydraulic patterns such as erosional scour eddies, which act to scour the toe of the slope, over-steepening and destabilising the alluvial margins. Analysis of bathymetric datasets between 2009 and 2014 revealed signs of active incision and erosional scour of the channel bed. This is counter to conceptual models which deem the backwater zone of a river to be one of decelerating flow and thus sediment deposition. Complex and variable flow patterns have been observed in other mixed alluvial-bedrock river systems, and signs of active incision observed in the backwater zone of the Mississippi River, United States. The incision and widening of the lower Murray River suggest the channel is in an erosional phase of channel readjustment which has implications for riverbank collapse on the alluvial margins. The prevention of seawater ingress due to barrage construction at the Murray mouth and Southern Ocean confluence allowed pool-levels to drop significantly during the Millennium Drought, reducing lateral confining support to the

  17. USING COMBINATION OF PLANAR AND HEIGHT FEATURES FOR DETECTING BUILT-UP AREAS FROM HIGH-RESOLUTION STEREO IMAGERY

    Directory of Open Access Journals (Sweden)

    F. Peng

    2017-09-01

    Full Text Available Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information, unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  18. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    Science.gov (United States)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition. Otherwise, built-up area detection accuracy is affected. Stereo imagery incorporates both planar and height information, unlike single remotely sensed images. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  19. Leukemia and colon tumor detection based on microarray data classification using momentum backpropagation and genetic algorithm as a feature selection method

    Science.gov (United States)

    Wisesty, Untari N.; Warastri, Riris S.; Puspitasari, Shinta Y.

    2018-03-01

    Cancer is one of the major causes of morbidity and mortality worldwide. There is therefore a need for a system that can analyze microarray data derived from a patient's Deoxyribonucleic Acid (DNA) and identify whether that person is suffering from cancer. However, microarray data contain thousands of attributes, which makes data processing challenging; this is often referred to as the curse of dimensionality. In this study we therefore built a system capable of detecting whether or not a patient has cancer. The algorithm uses a Genetic Algorithm for feature selection and a Momentum Backpropagation Neural Network as the classification method, with data taken from the Kent Ridge Bio-medical Dataset. Based on system testing, the system can detect leukemia and colon tumors with a best accuracy of 98.33% for the colon tumor data and 100% for the leukemia data. The Genetic Algorithm used for feature selection improves system accuracy from 64.52% to 98.33% for the colon tumor data and from 65.28% to 100% for the leukemia data, and the use of the momentum parameter accelerates the convergence of the neural network during training.
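
    An illustrative sketch of the two-stage pipeline, using a simple genetic algorithm to search feature masks and scoring each mask with a momentum-trained backpropagation network (sklearn's MLPClassifier with an SGD solver); population sizes and rates are placeholders, and a full microarray run would be far slower than this toy setting suggests:

```python
# GA feature selection with a momentum-trained MLP as the fitness classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def mask_fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd", momentum=0.9,
                        learning_rate_init=0.01, max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.1            # sparse initial masks
    for _ in range(generations):
        fits = np.array([mask_fitness(m, X, y) for m in pop])
        order = np.argsort(fits)[::-1]
        parents = pop[order[: pop_size // 2]]              # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
            child ^= rng.random(n_feat) < p_mut            # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fits = np.array([mask_fitness(m, X, y) for m in pop])
    return pop[fits.argmax()]                              # best boolean mask
```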

  20. Bathymetric and sediment facies maps for China Bend and Marcus Flats, Franklin D. Roosevelt Lake, Washington, 2008 and 2009

    Science.gov (United States)

    Weakland, Rhonda J.; Fosness, Ryan L.; Williams, Marshall L.; Barton, Gary J.

    2011-01-01

    The U.S. Geological Survey (USGS) created bathymetric and sediment facies maps for portions of two reaches of Lake Roosevelt in support of an interdisciplinary study of white sturgeon (Acipenser transmontanus) and their habitat areas within Franklin D. Roosevelt Lake, Washington. In October 2008, scientists from the USGS used a boat-mounted multibeam echo sounder (MBES) to collect bathymetric data to characterize surface relief at China Bend and Marcus Flats, between Northport and Kettle Falls, Washington. In March 2009, an underwater video camera was used to view and record sediment facies that were then characterized by sediment type, grain size, and areas of sand deposition. Smelter slag has been identified as having the characteristics of sand-sized black particles; the two non-invasive surveys attempted to identify areas containing black-colored particulate matter that may be elements and minerals, organic material, or slag. The white sturgeon population in Lake Roosevelt is threatened by the failure of natural recruitment, resulting in a native population that consists primarily of aging fish and that is gradually declining as fish die and are not replaced by non-hatchery-reared juvenile fish. These fish spawn and rear in the riverine and upper reservoir reaches where smelter slag is present in the sediment of the river lake bed. Effects of slag on the white sturgeon population in Lake Roosevelt are largely unknown. Two recent studies demonstrated that copper and other metals are mobilized from slag in aqueous environments, with concentrations of copper and zinc in bed sediments reaching levels of 10,000 and 30,000 mg/kg due to the presence of smelter slag. Copper was found to be highly toxic to 30-day-old white sturgeon, with 96-h LC50 concentrations ranging from 3 to 5 µg copper per liter. Older juvenile and adult sturgeons commonly ingest substantial amounts of sediment while foraging. Future study efforts in Lake Roosevelt should include sampling of

  1. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity of a time series can characterize its state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done on a quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. The new method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. A multi-kernel strategy was then proposed in the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature proved more sensitive than energy and entropy. An overall recognition accuracy of 87.5% and an overall forecasting accuracy of 75.0% were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection, and hence has value for practical applications.
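
    A minimal single-kernel sketch of an extreme learning machine (random hidden-layer weights plus a least-squares readout); the improved-DVV feature extraction and the multi-kernel combination described above are not reproduced, and X is assumed to already hold per-segment EEG features:

```python
# Basic ELM: random hidden layer, closed-form (pseudo-inverse) output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(int(y.max()) + 1)[y]            # one-hot targets
        self.beta = np.linalg.pinv(H) @ T          # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Illustrative use with synthetic per-segment features (e.g. an IDVV-like value)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(2, 1, (40, 6))])
y = np.r_[np.zeros(40, dtype=int), np.ones(40, dtype=int)]
print((SimpleELM(n_hidden=100).fit(X, y).predict(X) == y).mean())
```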

  2. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with nearshore synthetic waveforms is a promising tool for rapid tsunami early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic nearshore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and nearshore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in Hazard Mapping and Early Warning.

  3. Infants' Developing Sensitivity to Object Function: Attention to Features and Feature Correlations

    Science.gov (United States)

    Baumgartner, Heidi A.; Oakes, Lisa M.

    2011-01-01

    When learning object function, infants must detect relations among features--for example, that squeezing is associated with squeaking or that objects with wheels roll. Previously, Perone and Oakes (2006) found 10-month-old infants were sensitive to relations between object appearances and actions, but not to relations between appearances and…

  4. High-resolution bathymetry as a primary exploration tool for seafloor massive sulfide deposits - lessons learned from exploration on the Mid-Atlantic and Juan de Fuca Ridges, and northern Lau Basin

    Science.gov (United States)

    Jamieson, J. W.; Clague, D. A.; Petersen, S.; Yeo, I. A.; Escartin, J.; Kwasnitschka, T.

    2016-12-01

    High-resolution, autonomous underwater vehicle (AUV)-derived multibeam bathymetry is increasingly being used as an exploration tool for delineating the size and extent of hydrothermal vent fields and associated seafloor massive sulfide deposits. However, because of the limited amount of seafloor that can be surveyed during a single dive, and the challenges associated with distinguishing hydrothermal chimneys and mounds from other volcanic and tectonic features using solely bathymetric data, AUV mapping surveys have largely been employed as a secondary exploration tool once hydrothermal sites have been discovered using other exploration methods such as plume, self-potential and TV surveys, or ROV and submersible dives. Visual ground-truthing is often required to attain an acceptable level of confidence in the hydrothermal origin of features identified in AUV-derived bathymetry. Here, we present examples of high-resolution bathymetric surveys of vent fields from a variety of tectonic environments, including slow- and intermediate-rate mid-ocean ridges, oceanic core complexes and back arc basins. Results illustrate the diversity of sulfide deposit morphologies, and the challenges associated with identifying hydrothermal features in different tectonic environments. We present a developing set of criteria that can be used to distinguish hydrothermal deposits in bathymetric data, and how AUV surveys can be used either on their own or in conjunction with other exploration techniques as a primary exploration tool.

  5. Image feature detectors and descriptors foundations and applications

    CERN Document Server

    Hassaballah, Mahmoud

    2016-01-01

    This book provides readers with a selection of high-quality chapters that cover both theoretical concepts and practical applications of image feature detectors and descriptors. It serves as a reference for researchers and practitioners by featuring survey chapters and research contributions on image feature detectors and descriptors. Additionally, it emphasizes several keywords in both theoretical and practical aspects of image feature extraction. The keywords include acceleration of feature detection and extraction, hardware implementations, image segmentation, evolutionary algorithms, ordinal measures, as well as visual speech recognition.

  6. Improving EEG signal peak detection using feature weight learning ...

    Indian Academy of Sciences (India)

    Therefore, we aimed to develop a general procedure for eye event-related applications based on feature weight learning (FWL), through the use of a neural network with random weights (NNRW) as the classifier. The FWL is performed using a particle swarm optimization algorithm, applied to the well-studied Dumpala, Acir, ...

  7. Possible Detection of an Emission Cyclotron Resonance Scattering Feature from the Accretion-Powered Pulsar 4U 1626-67

    Science.gov (United States)

    Iwakiri, W. B.; Terada, Y.; Tashiro, M. S.; Mihara, T.; Angelini, L.; Yamada, S.; Enoto, T.; Makishima, K.; Nakajima, M.; Yoshida, A.

    2012-01-01

    We present analysis of 4U 1626-67, a 7.7 s pulsar in a low-mass X-ray binary system, observed with the hard X-ray detector of the Japanese X-ray satellite Suzaku in 2006 March for a net exposure of 88 ks. The source was detected at an average 10-60 keV flux of approximately 4 × 10^-10 erg cm^-2 s^-1. The phase-averaged spectrum is reproduced well by combining a negative and positive power-law times exponential cutoff (NPEX) model modified at approximately 37 keV by a cyclotron resonance scattering feature (CRSF). The phase-resolved analysis shows that the spectra at the bright phases are well fit by the NPEX with CRSF model. On the other hand, the spectrum in the dim phase lacks the NPEX high-energy cutoff component, and the CRSF can be reproduced by either an emission or an absorption profile. When fitting the dim phase spectrum with the NPEX plus Gaussian model, we find that the feature is better described in terms of an emission rather than an absorption profile. The statistical significance of this result, evaluated by means of an F test, is between 2.91 × 10^-3 and 1.53 × 10^-5, taking into account the systematic errors in the background evaluation of HXD-PIN. We find that the emission profile is more feasible than the absorption one for comparing the physical parameters in other phases. Therefore, we have possibly detected an emission line at the cyclotron resonance energy in the dim phase.

  8. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, Tyrone

    2015-09-01

    Full Text Available data, which is rarely available in operational networks. It uses normalized cluster validity indices as an objective function that is optimized over the search space of candidate feature subsets via a genetic algorithm. Feature sets produced...

  9. 2006 NOAA Bathymetric Lidar: Puerto Rico (Southwest)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set (Project Number OPR-I305-KRL-06) depicts depth values (mean 5 meter gridded) collected using LiDAR (Light Detection & Ranging) from the shoreline...

  10. Information Processing Features Can Detect Behavioral Regimes of Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Rick Quax

    2018-01-01

    Full Text Available In dynamical systems, local interactions between dynamical units generate correlations which are stored and transmitted throughout the system, generating the macroscopic behavior. However a framework to quantify exactly how these correlations are stored, transmitted, and combined at the microscopic scale is missing. Here we propose to characterize the notion of “information processing” based on all possible Shannon mutual information quantities between a future state and all possible sets of initial states. We apply it to the 256 elementary cellular automata (ECA), which are the simplest possible dynamical systems exhibiting behaviors ranging from simple to complex. Our main finding is that only a few information features are needed for full predictability of the systemic behavior and that the “information synergy” feature is always most predictive. Finally we apply the idea to foreign exchange (FX) and interest-rate swap (IRS) time-series data. We find an effective “slowing down” leading indicator in all three markets for the 2008 financial crisis when applied to the information features, as opposed to using the data itself directly. Our work suggests that the proposed characterization of the local information processing of units may be a promising direction for predicting emergent systemic behaviors.
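
    A small sketch of the information-feature idea on an elementary cellular automaton: the Shannon mutual information between one cell's state after t steps and a chosen set of initial cells is estimated from an ensemble of random initial conditions; rule number, lattice width, and sample size are illustrative:

```python
# Estimate I(future cell state ; set of initial cells) for an ECA by sampling
# random initial conditions and counting joint outcomes.
import numpy as np
from collections import Counter

def eca_step(state, rule=110):
    table = [(rule >> i) & 1 for i in range(8)]          # Wolfram rule table
    left, right = np.roll(state, 1), np.roll(state, -1)
    return np.array([table[4 * l + 2 * c + r] for l, c, r in zip(left, state, right)])

def mutual_information(samples):
    """samples: list of (x_tuple, y) observations; returns MI in bits."""
    n = len(samples)
    pxy = Counter(samples)
    px = Counter(x for x, _ in samples)
    py = Counter(y for _, y in samples)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def info_feature(initial_cells, target_cell=0, width=16, t=5,
                 rule=110, n_samples=4000, seed=0):
    rng = np.random.default_rng(seed)
    obs = []
    for _ in range(n_samples):
        s0 = rng.integers(0, 2, width)
        s = s0.copy()
        for _ in range(t):
            s = eca_step(s, rule)
        obs.append((tuple(s0[list(initial_cells)]), int(s[target_cell])))
    return mutual_information(obs)

print(info_feature(initial_cells=(0, 1, 15)))   # I(future cell 0 ; initial cells {0,1,15})
```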

  11. Real-Time Detection and Measurement of Eye Features from Color Images

    Directory of Open Access Journals (Sweden)

    Diana Borza

    2016-07-01

    Full Text Available The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly available database.
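
    A rough stand-in for the first stage (pupil centre and iris radius) is sketched below using OpenCV's Hough circle transform instead of the paper's fast circular symmetry detector and radial gradient projections; the file name and all parameter values are assumptions for illustration only.

      # Sketch: pupil centre and iris radius via Hough circles (stand-in technique).
      import cv2
      import numpy as np

      gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
      blurred = cv2.medianBlur(gray, 5)

      # Dark pupil first: look for small, high-contrast circles.
      pupils = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                param1=100, param2=20, minRadius=5, maxRadius=40)
      # Then the iris: a larger circle roughly concentric with the pupil.
      irises = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                param1=100, param2=30, minRadius=40, maxRadius=120)

      if pupils is not None and irises is not None:
          px, py, pr = np.uint16(np.around(pupils))[0, 0]
          ix, iy, ir = np.uint16(np.around(irises))[0, 0]
          print(f"pupil centre ({px}, {py}), radius {pr}; iris radius {ir}")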

  12. Evaluation of wavelet spectral features in pathological detection and discrimination of yellow rust and powdery mildew in winter wheat with hyperspectral reflectance data

    Science.gov (United States)

    Shi, Yue; Huang, Wenjiang; Zhou, Xianfeng

    2017-04-01

    Hyperspectral absorption features are important indicators of characterizing plant biophysical variables for the automatic diagnosis of crop diseases. Continuous wavelet analysis has proven to be an advanced hyperspectral analysis technique for extracting absorption features; however, specific wavelet features (WFs) and their relationship with pathological characteristics induced by different infestations have rarely been summarized. The aim of this research is to determine the most sensitive WFs for identifying specific pathological lesions from yellow rust and powdery mildew in winter wheat, based on 314 hyperspectral samples measured in field experiments in China in 2002, 2003, 2005, and 2012. The resultant WFs could be used as proxies to capture the major spectral absorption features caused by infestation of yellow rust or powdery mildew. Multivariate regression analysis based on these WFs outperformed conventional spectral features in disease detection; meanwhile, a Fisher discrimination model exhibited considerable potential for generating separable clusters for each infestation. Optimal classification returned an overall accuracy of 91.9% with a Kappa of 0.89. This paper also emphasizes the WFs and their relationship with pathological characteristics in order to provide a foundation for the further application of this approach in monitoring winter wheat diseases at the regional scale.
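
    The wavelet-feature idea above can be illustrated with a continuous wavelet transform of a single reflectance spectrum; a wavelet feature is then simply the coefficient at a chosen scale and wavelength. The Mexican-hat mother wavelet, the scales and the synthetic spectrum below are assumptions, not the authors' settings.

      # Sketch: continuous wavelet transform of a reflectance spectrum with PyWavelets.
      import numpy as np
      import pywt

      wavelengths = np.arange(400, 1000)                    # nm, 1 nm sampling
      reflectance = 0.4 + 0.1 * np.sin(wavelengths / 90.0)
      reflectance -= 0.15 * np.exp(-0.5 * ((wavelengths - 670) / 15.0) ** 2)  # absorption dip

      scales = [2, 4, 8, 16, 32]
      coeffs, _ = pywt.cwt(reflectance, scales, "mexh")     # shape: (len(scales), len(wavelengths))

      # A "wavelet feature" here is the coefficient at a given (scale, band); e.g.
      # the strongest absorption-related response in the red region:
      flat = np.argmax(np.abs(coeffs[:, 250:300]))
      scale_idx, wl_idx = np.unravel_index(flat, coeffs[:, 250:300].shape)
      print("strongest feature: scale", scales[scale_idx], "at", wavelengths[250 + wl_idx], "nm")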

  13. Building an intrusion detection system using a filter-based feature selection algorithm

    NARCIS (Netherlands)

    Ambusaidi, Mohammed A.; He, Xiangjian; Nanda, Priyadarsi; Tan, Zhiyuan

    2016-01-01

    Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a
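
    The record above is truncated, but a generic filter-based selection step of the kind it discusses can be sketched as follows: rank features by mutual information with the class label and keep the top k. The dataset, the value of k and the downstream classifier are illustrative assumptions, not the algorithm proposed in the paper.

      # Sketch: mutual-information filter feature selection before classification.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import LinearSVC

      X, y = make_classification(n_samples=500, n_features=40, n_informative=6,
                                 n_redundant=10, random_state=0)

      selector = SelectKBest(mutual_info_classif, k=8).fit(X, y)
      X_sel = selector.transform(X)

      print("kept features:", np.where(selector.get_support())[0])
      print("accuracy (all 40): %.3f" % cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean())
      print("accuracy (top 8):  %.3f" % cross_val_score(LinearSVC(dual=False), X_sel, y, cv=5).mean())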

  14. Species composition and bathymetric distribution of gorgonians (Anthozoa: Octocorallia) on the Southern Mexican Pacific coast

    Directory of Open Access Journals (Sweden)

    Rosalinda Abeytia

    2013-09-01

    Full Text Available Gorgonians are important components of coastal ecosystems, as they provide niches, natural compounds with medical applications and are used as bioindicators. Species composition and assemblage structure of gorgonians (Anthozoa: Octocorallia) were studied along a bathymetric profile in the Southern Mexican Pacific coast. Species composition was based on specimens collected within a depth range of 0-70m in 15 sites. The relative abundance of species was determined in six sites at four depths (5, 10, 20 and 25m) using three 10m2 transects at each depth level. Twenty-seven species of gorgonians belonging to six genera and three families were registered. The species composition varied with depth: 11 species were distributed between 0-25m depth, while 17 species were found between 40-70m depth interval. The shallow zone is characterized by a relatively large abundance of gorgonians, dominated by colonies of Leptogorgia cuspidata and L. ena. In contrast, the deepest zone was characterized by relatively low abundance of gorgonians, dominated by L. alba, the only species observed in both depth intervals. The similarity analysis showed differences in the composition and abundance of species by depth and site, suggesting that the main factor in determining the assemblage structure is depth. Results of this study suggest that the highest richness of gorgonian species in the study area may be located at depths of 40-70m, whereas the highest abundances are found between 5 and 10m depth. This study represents a contribution to the poorly known eastern Pacific gorgonian biota.

  15. A multi-feature integration method for fatigue crack detection and crack length estimation in riveted lap joints using Lamb waves

    Science.gov (United States)

    He, Jingjing; Guan, Xuefei; Peng, Tishun; Liu, Yongming; Saxena, Abhinav; Celaya, Jose; Goebel, Kai

    2013-10-01

    This paper presents an experimental study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in situ non-destructive evaluation (NDE) during fatigue cyclical loading. PZT wafers are used to monitor the wave reflection from the boundaries of the fatigue crack at the edge of bolt joints. The group velocity of the guided wave is calculated to select a proper time window in which the received signal contains the damage information. It is found that the fatigue crack lengths are correlated with three main features of the signal, i.e., correlation coefficient, amplitude change, and phase change. It was also observed that a single feature cannot be used to quantify the damage among different specimens since a considerable variability was observed in the response from different specimens. A multi-feature integration method based on a second-order multivariate regression analysis is proposed for the prediction of fatigue crack lengths using sensor measurements. The model parameters are obtained using training datasets from five specimens. The effectiveness of the proposed methodology is demonstrated using several lap joint specimens from different manufacturers and under different loading conditions.
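
    The integration step described above, a second-order multivariate regression from the three signal features to crack length, can be sketched as follows; the training values are synthetic placeholders rather than the paper's measurements.

      # Sketch: second-order (degree-2) multivariate regression mapping the three
      # Lamb-wave features to crack length.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(0)

      # features: [correlation coefficient, amplitude change, phase change]
      true_len = rng.uniform(1.0, 12.0, size=40)                 # mm, synthetic
      X = np.column_stack([1.0 - 0.05 * true_len + 0.01 * rng.normal(size=40),
                           0.08 * true_len + 0.02 * rng.normal(size=40),
                           0.30 * true_len + 0.05 * rng.normal(size=40)])

      model = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                            LinearRegression())
      model.fit(X, true_len)

      new_signal_features = np.array([[0.72, 0.45, 1.8]])        # hypothetical measurement
      print("estimated crack length: %.2f mm" % model.predict(new_signal_features)[0])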

  16. A multi-feature integration method for fatigue crack detection and crack length estimation in riveted lap joints using Lamb waves

    International Nuclear Information System (INIS)

    He, Jingjing; Guan, Xuefei; Peng, Tishun; Liu, Yongming; Saxena, Abhinav; Celaya, Jose; Goebel, Kai

    2013-01-01

    This paper presents an experimental study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in situ non-destructive evaluation (NDE) during fatigue cyclical loading. PZT wafers are used to monitor the wave reflection from the boundaries of the fatigue crack at the edge of bolt joints. The group velocity of the guided wave is calculated to select a proper time window in which the received signal contains the damage information. It is found that the fatigue crack lengths are correlated with three main features of the signal, i.e., correlation coefficient, amplitude change, and phase change. It was also observed that a single feature cannot be used to quantify the damage among different specimens since a considerable variability was observed in the response from different specimens. A multi-feature integration method based on a second-order multivariate regression analysis is proposed for the prediction of fatigue crack lengths using sensor measurements. The model parameters are obtained using training datasets from five specimens. The effectiveness of the proposed methodology is demonstrated using several lap joint specimens from different manufacturers and under different loading conditions. (paper)

  17. Fast region-based object detection and tracking using correlation of features

    CSIR Research Space (South Africa)

    Senekal, F

    2010-11-01

    Full Text Available and track a target object (or objects) over a series of digital images. Visual target tracking can be accomplished by feature-based or region-based approaches. In feature-based approaches, interest points are calculated in a digital image, and a local...-time performance based on the computational power that is available on a specific platform. To further reduce the computational requirements, processing is restricted to the region of interest (ROI). The region of interest is provided as an input parameter...

  18. Usage of polarisation features of landmines for improved automatic detection

    NARCIS (Netherlands)

    Jong, W. de; Cremer, F.; Schutte, K.; Storm, J.

    2000-01-01

    In this paper the landmine detection performance of an infrared and a visual light camera both equipped with a polarisation filter are compared with the detection performance of these cameras without polarisation filters. Sequences of images have been recorded with in front of these cameras a

  19. The Role of Attention in the Maintenance of Feature Bindings in Visual Short-term Memory

    Science.gov (United States)

    Johnson, Jeffrey S.; Hollingworth, Andrew; Luck, Steven J.

    2008-01-01

    This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were…

  20. Derivative-based scale invariant image feature detector with error resilience.

    Science.gov (United States)

    Mainali, Pradip; Lafruit, Gauthier; Tack, Klaas; Van Gool, Luc; Lauwereins, Rudy

    2014-05-01

    We present a novel scale-invariant image feature detection algorithm (D-SIFER) using a newly proposed scale-space optimal 10th-order Gaussian derivative (GDO-10) filter, which reaches the jointly optimal Heisenberg's uncertainty of its impulse response in scale and space simultaneously (i.e., we minimize the maximum of the two moments). The D-SIFER algorithm using this filter leads to an outstanding quality of image feature detection, with a factor of three quality improvement over state-of-the-art scale-invariant feature transform (SIFT) and speeded up robust features (SURF) methods that use the second-order Gaussian derivative filters. To reach low computational complexity, we also present a technique approximating the GDO-10 filters with a fixed-length implementation, which is independent of the scale. The final approximation error remains far below the noise margin, providing constant time, low cost, but nevertheless high-quality feature detection and registration capabilities. D-SIFER is validated on a real-life hyperspectral image registration application, precisely aligning up to hundreds of successive narrowband color images, despite their strong artifacts (blurring, low-light noise) typically occurring in such delicate optical system setups.

  1. Fusion of Heterogeneous Intrusion Detection Systems for Network Attack Detection

    Directory of Open Access Journals (Sweden)

    Jayakumar Kaliappan

    2015-01-01

    Full Text Available An intrusion detection system (IDS) helps to identify different types of attacks in general, and its detection rate is usually higher for some specific category of attacks. This paper is built on the idea that each IDS is efficient at detecting a specific type of attack. In the proposed Multiple IDS Unit (MIU), there are five IDS units, and each IDS follows a unique algorithm to detect attacks. Feature selection is done with the help of a genetic algorithm. The selected features of the input traffic are passed on to the MIU for processing. The decision from each IDS is termed a local decision. The fusion unit inside the MIU processes all the local decisions with the help of a majority voting rule and makes the final decision. The proposed system shows a very good improvement in detection rate and reduces the false alarm rate.
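
    A minimal sketch of the fusion idea follows: several detectors each produce a local decision and a hard majority vote forms the final decision. The five scikit-learn classifiers below merely stand in for the paper's five IDS units, and the dataset is synthetic.

      # Sketch: majority-vote fusion of several detectors' local decisions.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      units = [("nb", GaussianNB()), ("knn", KNeighborsClassifier()),
               ("tree", DecisionTreeClassifier(random_state=0)),
               ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
               ("lr", LogisticRegression(max_iter=1000))]

      fusion = VotingClassifier(units, voting="hard")   # hard voting = majority rule
      fusion.fit(X_tr, y_tr)
      print("fused detection accuracy: %.3f" % fusion.score(X_te, y_te))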

  2. Feature extraction with deep neural networks by a generalized discriminant analysis.

    Science.gov (United States)

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  3. Iris features-based heart disease diagnosis by computer vision

    Science.gov (United States)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

    The study takes advantage of several recent advances in computer vision to develop a new iris-based biomedical platform that processes iris images for early detection of heart disease. Early detection of heart disease makes non-surgical treatment possible, as suggested by biomedical researchers and associated institutions. However, a clinically practicable solution that is both sensitive and specific for early detection is still lacking; delayed diagnostic procedures, inefficiency, and the complications of available methods contribute to a high mortality rate. This research therefore proposes a novel IFB (Iris Features Based) method for the diagnosis of premature and early-stage heart disease. The method combines computer vision and iridology to obtain a robust, non-contact, non-radioactive, and cost-effective diagnostic tool. It analyzes abnormal inherent weakness in tissues and changes in color and pattern in the specific region of the iris that responds to impulses of the heart organ, according to the Bernard Jensen iris chart. Such iris changes indicate the presence of degenerative abnormalities in the heart. These changes are detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel and multi-class support vector machine classifiers are used to classify normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance, but also provides insight for early detection of other diseases.

  4. Detection Range of Airborne Magnetometers in Magnetic Anomaly Detection

    Directory of Open Access Journals (Sweden)

    Chengjing Li

    2015-11-01

    Full Text Available Airborne magnetometers are utilized for small-range search, precise positioning, and identification of the ferromagnetic properties of underwater targets. Although the detection range is an important performance parameter of such sensors, it is commonly set as a fixed value in classical models of airborne magnetic anomaly detection, regardless of the influences of environmental noise, target magnetic properties, and platform features; as a consequence, the analysis of detection ability is biased. In this study, a novel detection range model is proposed on the basis of classical detection range models of airborne magnetometers. In this model, probability distributions are applied, and the magnetic properties of the target and the environmental noise properties of a moving submarine are considered. The detection range model is also constructed by considering the distribution of the moving submarine during detection. A cell-averaging greatest-of constant false alarm rate (CA-GO CFAR) test is used to calculate the detection range of the model at a desired false alarm rate. The detection range model is then used to establish typical submarine search probabilistic models. Results show that the model can be used to evaluate not only the effects of ambient magnetic noise but also the motion and geomagnetic features of the target and the airborne detection platform, and to estimate the actual operating range of sensor systems.
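
    The cell-averaging greatest-of CFAR test mentioned above can be sketched in one dimension as follows; the window sizes, scaling factor and synthetic signal are illustrative assumptions.

      # Sketch: cell-averaging greatest-of (CA-GO) CFAR test on a 1-D measurement series.
      import numpy as np

      rng = np.random.default_rng(0)

      def cago_cfar(x, n_train=16, n_guard=4, scale=3.0):
          """Return indices where the cell under test exceeds the CA-GO threshold."""
          detections = []
          half = n_train // 2
          for i in range(half + n_guard, len(x) - half - n_guard):
              lead = x[i - n_guard - half: i - n_guard]          # leading training cells
              lag = x[i + n_guard + 1: i + n_guard + 1 + half]   # lagging training cells
              noise = max(lead.mean(), lag.mean())               # greatest-of selection
              if x[i] > scale * noise:
                  detections.append(i)
          return detections

      signal = rng.exponential(scale=1.0, size=500)              # noise-like background
      signal[230] += 25.0                                        # injected anomaly
      print("detections at indices:", cago_cfar(signal))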

  5. Detection of the 3.4 micron emission feature in Comets P/Brorsen-Metcalf and Okazaki-Levy-Rudenko (1989r) and an observational summary

    International Nuclear Information System (INIS)

    Brooke, T.Y.; Tokunaga, A.T.; Knacke, R.F.

    1991-01-01

    The 3.4 micron emission feature due to cometary organics was detected in Comets P/Brorsen-Metcalf and Okazaki-Levy-Rudenko (1989r). Feature-to-continuum ratios in these two comets were higher than those expected from the trend seen in other comets to date. Three micron spectra of eight comets are reviewed. The 3.4 micron band flux is better correlated with the water production rate than with the dust production rate in this sample of comets. High feature-to-continuum ratios in P/Brorsen-Metcalf and Okazaki-Levy-Rudenko can be explained by the low dust-to-gas ratios of these two comets. The observations to date are consistent with cometary organics being present in all comets (even those for which no 3.4 micron feature was evident) at comparable abundances with respect to water. The emission mechanism and absolute abundance of the organics are not well determined; either gas-phase fluorescence or thermal emission from hot grains is consistent with the heliocentric distance dependence of the 3.4 micron band flux. There is an overall similarity in the spectral profiles of the 3.4 micron feature in comets; however, there are some potentially significant differences in the details of the spectra. 30 refs

  6. Innovative R.E.A. tools for integrated bathymetric survey

    Science.gov (United States)

    Demarte, Maurizio; Ivaldi, Roberta; Sinapi, Luigi; Bruzzone, Gabriele; Caccia, Massimo; Odetti, Angelo; Fontanelli, Giacomo; Masini, Andrea; Simeone, Emilio

    2017-04-01

    sensors useful for seabed analysis. The very stable platform located on top of the USV allows the RPAS to take off and land. By exploiting its greater power autonomy and load capability, the USV will be used as a mothership for the RPAS. In particular, during missions the USV will be able to recharge the RPAS and to act as a communication bridge between the RPAS and its control station. The main advantage of the system is the remote acquisition of high-resolution bathymetric data from the RPAS in areas where a systematic, traditional survey is rarely or never possible. These tools (a USV carrying an RPAS with a hyperspectral camera) constitute an innovative and powerful system that gives the Emergency Response Unit the right instruments to react quickly. The development of this capability could resolve the classical conflict between resolution, needed to capture fine-scale variability, and coverage, needed for large environmental phenomena, in a setting such as the coastal environment, which varies strongly over a wide range of spatial and temporal scales.

  7. Tidal asymmetries of velocity and stratification over a bathymetric depression in a tropical inlet

    Science.gov (United States)

    Waterhouse, Amy F.; Valle-Levinson, Arnoldo; Morales Pérez, Rubén A.

    2012-10-01

    Observations of current velocity, sea surface elevation and vertical profiles of density were obtained in a tropical inlet to determine the effect of a bathymetric depression (hollow) on the tidal flows. Surveys measuring velocity profiles were conducted over a diurnal tidal cycle with mixed spring tides during dry and wet seasons. Depth-averaged tidal velocities during ebb and flood tides behaved according to Bernoulli dynamics, as expected. The dynamic balance of depth-averaged quantities in the along-channel direction was governed by along-channel advection and pressure gradients with baroclinic pressure gradients only being important during the wet season. The vertical structure of the along-channel flow during flood tides exhibited a mid-depth maximum with lateral shear enhanced during the dry season as a result of decreased vertical stratification. During ebb tides, along-channel velocities in the vicinity of the hollow were vertically sheared with a weak return flow at depth due to choking of the flow on the seaward slope of the hollow. The potential energy anomaly, a measure of the amount of energy required to fully mix the water column, showed two peaks in stratification associated with ebb tide and a third peak occurring at the beginning of flood. After the first mid-ebb peak in stratification, ebb flows were constricted on the seaward slope of the hollow resulting in a bottom return flow. The sinking of surface waters and enhanced mixing on the seaward slope of the hollow reduced the potential energy anomaly after maximum ebb. The third peak in stratification during early flood occurred as a result of denser water entering the inlet at mid-depth. This dense water mixed with ambient deep waters increasing the stratification. Lateral shear in the along-channel flow across the hollow allowed trapping of less dense water in the surface layers further increasing stratification.
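
    The potential energy anomaly used above is commonly defined (in the Simpson-type convention; the authors' exact formulation may differ) as the work per unit volume needed to mix the water column of depth H completely:

      \phi = \frac{1}{H} \int_{-H}^{0} \left( \bar{\rho} - \rho(z) \right) g \, z \, dz,
      \qquad
      \bar{\rho} = \frac{1}{H} \int_{-H}^{0} \rho(z) \, dz

    Here rho(z) is the density profile, rho-bar its depth average and g the gravitational acceleration; phi is zero for a fully mixed column and grows with stratification, which is why its peaks and drops trace the stratification and mixing events described in the abstract.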

  8. SU-F-R-17: Advancing Glioblastoma Multiforme (GBM) Recurrence Detection with MRI Image Texture Feature Extraction and Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Yu, V; Ruan, D; Nguyen, D; Kaprealian, T; Chin, R; Sheng, K [UCLA School of Medicine, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images post primary treatment intervention. Methods: MR image-sets of six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-Flair, and T2-TSE images. Eight Gray-level co-occurrence matrix (GLCM) texture features including Contrast, Correlation, Dissimilarity, Energy, Entropy, Homogeneity, Sum-Average, and Variance were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI) and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time as expected. The size of classified recurrence maintained at a stable level of approximately 0.1 cm³ up to 272 days prior to confirmation. Noticeable volume increase to 0.44 cm³ was demonstrated at 96 days prior, followed by significant increase to 1.57 cm³ at 42 days prior. Visualization of the classified volume shows the merging of recurrence-susceptible region as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting the recurrent GBM a long time before the recurrence is confirmed by a radiologist. The early detection may improve the efficacy of targeted intervention including radiosurgery. More patient cases will be included to create a generalizable
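
    The GLCM feature computation described above can be sketched with scikit-image as follows (graycomatrix is spelled greycomatrix in older releases). Only the properties exposed by graycoprops are shown; entropy, sum-average and variance would have to be derived from the matrix directly. The patches, labels and SVM settings are synthetic placeholders.

      # Sketch: GLCM texture features from image patches, fed to an RBF SVM.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      def glcm_features(patch, levels=32):
          q = (patch / patch.max() * (levels - 1)).astype(np.uint8)   # quantize gray levels
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "correlation", "dissimilarity", "energy", "homogeneity"]
          return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

      # Two synthetic "tissue" classes: smooth vs. noisy patches.
      patches = [rng.normal(100, 3, (32, 32)).clip(0, 255) for _ in range(20)] + \
                [rng.normal(100, 40, (32, 32)).clip(0, 255) for _ in range(20)]
      labels = [0] * 20 + [1] * 20

      X = np.array([glcm_features(p) for p in patches])
      clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
      print("training accuracy: %.2f" % clf.score(X, labels))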

  9. Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.

    Science.gov (United States)

    Emam, Mahmoud; Han, Qi; Zhang, Hongli

    2018-01-01

    In digital image forensics, copy-move or region duplication forgery detection has recently become a vital research topic. Most existing keypoint-based forgery detection methods fail to detect forgery in smooth regions, even though they handle geometric changes well. To solve these problems and detect points that cover all the regions, we propose a two-step keypoint detection scheme. First, we employ the scale-invariant feature operator to detect spatially distributed keypoints in the textured regions. Second, keypoints in the missing regions are detected using a Harris corner detector with non-maximal suppression so that the detected keypoints are evenly distributed. To improve the matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation is carried out based on precision-recall rates and a commonly tested dataset. The results demonstrate that the proposed scheme has better detection performance and robustness against some geometric transformation attacks compared with state-of-the-art methods. © 2017 American Academy of Forensic Sciences.
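
    A rough sketch of the two-stage idea follows: scale-invariant keypoints from textured regions, supplemented by Harris corners (OpenCV's goodFeaturesToTrack, whose minDistance parameter provides a simple non-maximal suppression) to cover the remaining regions. SIFT is used here as a readily available scale-invariant operator, and the file name and thresholds are assumptions.

      # Sketch: two-stage keypoint detection (scale-invariant + Harris corners).
      import cv2

      gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

      # Stage 1: scale-invariant keypoints from textured regions.
      sift = cv2.SIFT_create()
      kp1 = sift.detect(gray, None)

      # Stage 2: Harris-based corners to fill in smooth / missed regions.
      corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                        minDistance=10, useHarrisDetector=True, k=0.04)
      kp2 = []
      if corners is not None:
          kp2 = [cv2.KeyPoint(float(x), float(y), 3.0) for x, y in corners.reshape(-1, 2)]

      keypoints = list(kp1) + kp2
      print(f"{len(kp1)} scale-invariant + {len(kp2)} Harris keypoints")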

  10. Feature-guided analysis for reduction of false positives in CAD of polyps for computed tomographic colonography

    International Nuclear Information System (INIS)

    Naeppi, Janne; Yoshida, Hiroyuki

    2003-01-01

    We evaluated the effect of our novel technique of feature-guided analysis of polyps on the reduction of false-positive (FP) findings generated by our computer-aided diagnosis (CAD) scheme for the detection of polyps from computed tomography colonographic data sets. The detection performance obtained by use of feature-guided analysis in the segmentation and feature analysis of polyp candidates was compared with that obtained by use of our previously employed fuzzy clustering technique. We also evaluated the effect of a feature called modified gradient concentration (MGC) on the detection performance. A total of 144 data sets, representing prone and supine views of 72 patients that included 14 patients with 21 colorectal polyps 5-25 mm in diameter, were used in the evaluation. At a 100% by-patient (95% by-polyp) detection sensitivity, the FP rate of our CAD scheme with feature-guided analysis based on round-robin evaluation was 1.3 (1.5) FP detections per patient. This corresponds to a 70-75 % reduction in the number of FPs obtained by use of fuzzy clustering at the same sensitivity levels. Application of the MGC feature instead of our previously used gradient concentration feature did not improve the detection result. The results indicate that feature-guided analysis is useful for achieving high sensitivity and a low FP rate in our CAD scheme

  11. Computational modeling of river flow using bathymetry collected with an experimental, water-penetrating, green LiDAR

    Science.gov (United States)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2009-12-01

    Airborne bathymetric Light Detection and Ranging (LiDAR) systems designed for coastal and marine surveys are increasingly being deployed in fluvial environments. While the adaptation of this technology to rivers and streams would appear to be straightforward, currently technical challenges remain with regard to achieving high levels of vertical accuracy and precision when mapping bathymetry in shallow fluvial settings. Collectively these mapping errors have a direct bearing on hydraulic model predictions made using these data. We compared channel surveys conducted along the Platte River, Nebraska, and the Trinity River, California, using conventional ground-based methods with those made with the hybrid topographic/bathymetric Experimental Advanced Airborne Research LiDAR (EAARL). In the turbid and braided Platte River, a bathymetric-waveform processing algorithm was shown to enhance the definition of thalweg channels over a more simplified, first-surface waveform processing algorithm. Consequently flow simulations using data processed with the shallow bathymetric algorithm resulted in improved prediction of wetted area relative to the first-surface algorithm, when compared to the wetted area in concurrent aerial imagery. However, when compared to using conventionally collected data for flow modeling, the inundation extent was over predicted with the EAARL topography due to higher bed elevations measured by the LiDAR. In the relatively clear, meandering Trinity River, bathymetric processing algorithms were capable of defining a 3 meter deep pool. However, a similar bias in depth measurement was observed, with the LiDAR measuring the elevation of the river bottom above its actual position, resulting in a predicted water surface higher than that measured by field data. This contribution addresses the challenge of making bathymetric measurements with the EAARL in different environmental conditions encountered in fluvial settings, explores technical issues related to

  12. An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection.

    Science.gov (United States)

    Xu, Cheng; Liu, Jiamei; Yang, Weifeng; Shu, Yayun; Wei, Zhipeng; Zheng, Weiwei; Feng, Xin; Zhou, Fengfeng

    2018-04-01

    Transcriptomic and methylomic patterns represent two major OMIC data sources impacted by both inheritable genetic information and environmental factors, and have been widely used as disease diagnosis and prognosis biomarkers. Modern transcriptomic and methylomic profiling technologies detect the status of tens of thousands or even millions of probing residues in the human genome, and introduce a major computational challenge for the existing feature selection algorithms. This study proposes a three-step feature selection algorithm, TriVote, to detect a subset of transcriptomic or methylomic residues with highly accurate binary classification performance. TriVote outperforms both filter and wrapper feature selection algorithms with both higher classification accuracy and smaller feature number on 17 transcriptomes and two methylomes. Biological functions of the methylome biomarkers detected by TriVote were discussed for their disease associations. An easy-to-use Python package is also released to facilitate the further applications.

  13. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected and they may aggregate in richly textured regions, which not only slows down feature description but also increases the burden of subsequent processing; if the threshold is set high, feature points will be lacking in poorly textured areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting the feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of subsequent processing.
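
    A sketch of the per-cell threshold adjustment follows: the image is divided into a grid, each cell starts from a nominal FAST threshold, and the threshold is relaxed until the cell yields a target number of keypoints. The grid size, thresholds and target count are illustrative assumptions rather than the paper's settings.

      # Sketch: grid-based auto-adjustment of the FAST detection threshold.
      import cv2

      def grid_adaptive_fast(gray, rows=4, cols=4, target=50, t0=40, t_min=5):
          h, w = gray.shape
          keypoints = []
          for r in range(rows):
              for c in range(cols):
                  cell = gray[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
                  t, kps = t0, []
                  while t >= t_min:
                      kps = cv2.FastFeatureDetector_create(threshold=t).detect(cell, None)
                      if len(kps) >= target:
                          break
                      t -= 5                       # relax the threshold in poor-texture cells
                  for kp in kps[:target]:          # cap to keep the distribution uniform
                      keypoints.append(cv2.KeyPoint(kp.pt[0] + c * w // cols,
                                                    kp.pt[1] + r * h // rows,
                                                    kp.size, kp.angle, kp.response))
          return keypoints

      gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
      print(len(grid_adaptive_fast(gray)), "keypoints, roughly uniform over the grid")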

  14. Terrain feature recognition for synthetic aperture radar (SAR) imagery employing spatial attributes of targets

    Science.gov (United States)

    Iisaka, Joji; Sakurai-Amano, Takako

    1994-08-01

    This paper describes an integrated approach to terrain feature detection and several methods to estimate spatial information from SAR (synthetic aperture radar) imagery. Spatial information about image features, as well as spatial association, are key elements in terrain feature detection. After applying a small-feature-preserving despeckling operation, spatial information such as edginess, texture (smoothness), region-likeness and line-likeness of objects, target sizes, and target shapes were estimated. Then a trapezoid-shaped fuzzy membership function was assigned to each spatial feature attribute. Fuzzy classification logic was employed to detect terrain features. Terrain features such as urban areas, mountain ridges, lakes and other water bodies, as well as vegetated areas, were successfully identified from a sub-image of a JERS-1 SAR image. In the course of shape analysis, a quantitative method was developed to classify spatial patterns by expanding a spatial pattern through the use of a series of pattern primitives.
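
    The trapezoid membership function assigned to each spatial attribute, and a toy fuzzy rule that combines attributes with a min operator, can be sketched as follows; the breakpoints and attribute values are illustrative assumptions.

      # Sketch: trapezoid fuzzy membership and a min-combination rule for one class.
      def trapezoid(x, a, b, c, d):
          """Membership rises from a to b, is 1 between b and c, falls from c to d."""
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          if x < b:
              return (x - a) / (b - a)
          return (d - x) / (d - c)

      # Toy attribute values for one image region (edginess, smoothness, region-likeness).
      edginess, smoothness, region_like = 0.42, 0.35, 0.80

      urban_membership = min(trapezoid(edginess, 0.3, 0.5, 0.9, 1.0),
                             trapezoid(1.0 - smoothness, 0.4, 0.6, 1.0, 1.1),
                             trapezoid(region_like, 0.5, 0.7, 1.0, 1.1))
      print("degree of membership in class 'urban area': %.2f" % urban_membership)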

  15. Sensitivity to feature displacement in familiar and unfamiliar faces: beyond the internal/external feature distinction.

    Science.gov (United States)

    Brooks, Kevin R; Kemp, Richard I

    2007-01-01

    Previous studies of face recognition and of face matching have shown a general improvement for the processing of internal features as a face becomes more familiar to the participant. In this study, we used a psychophysical two-alternative forced-choice paradigm to investigate thresholds for the detection of a displacement of the eyes, nose, mouth, or ears for familiar and unfamiliar faces. No clear division between internal and external features was observed. Rather, for familiar (compared to unfamiliar) faces participants were more sensitive to displacements of internal features such as the eyes or the nose; yet for our third internal feature, the mouth, no such difference was observed. Despite large displacements, many subjects were unable to perform above chance when stimuli involved shifts in the position of the ears. These results are consistent with the proposal that familiarity effects may be mediated by the construction of a robust representation of a face, although the involvement of attention in the encoding of face stimuli cannot be ruled out. Furthermore, these effects are mediated by information from a spatial configuration of features, rather than by purely feature-based information.

  16. Feature-Learning-Based Printed Circuit Board Inspection via Speeded-Up Robust Features and Random Forest

    Directory of Open Access Journals (Sweden)

    Eun Hye Yuk

    2018-06-01

    Full Text Available With the coming of the 4th industrial revolution era, manufacturers produce high-tech products. As the production process is refined, inspection technologies become more important. Specifically, inspection of a printed circuit board (PCB), an indispensable part of electronic products, is an essential step for improving the quality of the process and the yield. Image processing techniques are utilized for inspection, but they have limitations because image backgrounds differ and the variety of defects keeps growing. In order to overcome these limitations, methods based on machine learning have been used recently. These methods can perform inspection without a normal reference image by learning fault patterns. This paper therefore proposes a method that can detect various types of defects using machine learning. The proposed method first extracts features through speeded-up robust features (SURF), then learns the fault patterns and calculates probabilities. After that, we generate a weighted kernel density estimation (WKDE) map weighted by the probabilities to take the density of the features into account. Because the WKDE map highlights areas where defects are concentrated, it improves the performance of the inspection. To verify the proposed method, we apply it to PCB images and confirm its performance.

  17. Improved Object Proposals with Geometrical Features for Autonomous Driving

    Directory of Open Access Journals (Sweden)

    Yiliu Feng

    2017-01-01

    Full Text Available This paper aims at generating high-quality object proposals for object detection in autonomous driving. Most existing proposal generation methods are designed for the general object detection, which may not perform well in a particular scene. We propose several geometrical features suited for autonomous driving and integrate them into state-of-the-art general proposal generation methods. In particular, we formulate the integration as a feature fusion problem by fusing the geometrical features with existing proposal generation methods in a Bayesian framework. Experiments on the challenging KITTI benchmark demonstrate that our approach improves the existing methods significantly. Combined with a convolutional neural net detector, our approach achieves state-of-the-art performance on all three KITTI object classes.

  18. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of feature extraction in the radar target classification process using the J frequency band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier transform (DFT) and Multiple Signal Characterisation (MUSIC), and on the detection of the Doppler effect. The analysis turned out in favour of the DFT with a Hanning windowing function. We aimed to classify vehicle targets into two classes, wheeled and tracked vehicles. The results show that it is possible to classify them only while they are moving, since the class feature arises from the motion of the vehicle's moving parts. However, we have not found any feature that classifies wheeled and tracked vehicles while they are stationary, even with their engines running.
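
    The preferred route described above, a Hanning-windowed DFT of the slow-time radar return from which the Doppler content produced by moving parts can be read off, can be sketched as follows; the pulse repetition frequency, pulse count and synthetic echo are assumptions.

      # Sketch: Hanning-windowed DFT of a slow-time radar return.
      import numpy as np

      prf = 4000.0                       # pulse repetition frequency, Hz (assumed)
      n = 256
      t = np.arange(n) / prf

      # Synthetic slow-time signal: body return at 600 Hz plus a weaker 1.4 kHz line
      # from rotating/track components, in noise.
      echo = (np.exp(2j * np.pi * 600.0 * t)
              + 0.3 * np.exp(2j * np.pi * 1400.0 * t)
              + 0.1 * (np.random.randn(n) + 1j * np.random.randn(n)))

      spectrum = np.fft.fftshift(np.fft.fft(echo * np.hanning(n)))
      freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / prf))

      peak = np.argmax(np.abs(spectrum))
      print("dominant Doppler component near %.0f Hz" % freqs[peak])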

  19. Research on driver fatigue detection

    Science.gov (United States)

    Zhang, Ting; Chen, Zhong; Ouyang, Chao

    2018-03-01

    Driver fatigue is one of the main causes of traffic accidents, so a driver fatigue detection system is of great significance for avoiding them. This paper presents a real-time method based on the fusion of multiple facial features, including eye closure, yawning and head movement. The eye state is classified as open or closed by a linear SVM classifier trained on HOG features of the detected eye. The mouth state is determined from the width-to-height ratio of the mouth. Head movement is detected from the head pitch angle calculated from facial landmarks. The driver's fatigue state can then be inferred by a model trained on the above features. According to the experimental results, driver fatigue detection achieves excellent performance, which indicates that the developed method is valuable for avoiding traffic accidents caused by driver fatigue.

  20. Principal Component and Cluster Analysis for determining diversification of bottom morphology based on bathymetric profiles from Brepollen (Hornsund, Spitsbergen) [the project was partly supported by the Polish Ministry of Science and Higher Education, Grant No. N N525 350038]

    Directory of Open Access Journals (Sweden)

    Mateusz Moskalik

    2014-01-01

    Full Text Available Navigation charts of the post-glacial regions of Arctic fjords tend not to cover regions from which glaciers have retreated. Whilst research vessels can make detailed bathymetric models using multibeam echosounders, they are often too large to enter such areas. Mapping these regions therefore requires smaller boats carrying single beam echosounders. To obtain morphology models of a quality equivalent to those generated using multibeam echosounders, new ways of processing data from single beam echosounders have to be found. The results and a comprehensive analysis of such measurements conducted in Brepollen (Hornsund, Spitsbergen) are presented in this article. The morphological differentiation of the seafloor was determined by calculating statistical, spectral, wavelet-transform, fractal and median-filtration parameters of segments of bathymetric profiles. This set of parameters constituted the input for Principal Component Analysis, and the resulting Principal Components were then used for Cluster Analysis. As a result of this procedure, three morphological classes are proposed for Brepollen: (i) steep slopes (southern Brepollen), (ii) flat bottoms (central Brepollen) and gentle slopes (the Storebreen glacier valley and the southern part of the Hornbreen glacier valley), and (iii) the morphologically most diverse region (the central Storebreen valley, the northern part of the Hornbreen glacier valley and the north-eastern part of central Brepollen).
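
    The processing chain described above can be sketched as follows: each bathymetric profile segment is described by a small vector of parameters, the parameter set is reduced with Principal Component Analysis, and the principal components are clustered. The descriptors, the synthetic segments and the choice of three clusters are illustrative assumptions.

      # Sketch: profile-segment parameters -> PCA -> cluster analysis.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)

      def segment_parameters(depths):
          """A few simple statistical/spectral descriptors of one profile segment."""
          slope = np.polyfit(np.arange(len(depths)), depths, 1)[0]
          spec = np.abs(np.fft.rfft(depths - depths.mean()))
          return [depths.std(), slope, np.ptp(depths),
                  spec[1:6].sum() / (spec.sum() + 1e-12),     # low-frequency share
                  np.median(np.abs(np.diff(depths)))]

      # Synthetic stand-ins for single-beam profile segments (flat, sloping, rough).
      segments = ([rng.normal(-60, 0.3, 200) for _ in range(30)] +
                  [-40 - 0.2 * np.arange(200) + rng.normal(0, 0.3, 200) for _ in range(30)] +
                  [rng.normal(-80, 3.0, 200) for _ in range(30)])

      X = StandardScaler().fit_transform([segment_parameters(s) for s in segments])
      pcs = PCA(n_components=3).fit_transform(X)
      classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
      print("segments per morphological class:", np.bincount(classes))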