WorldWideScience

Sample records for source localization algorithms

  1. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    Science.gov (United States)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The directions-of-arrival determined at the individual sensor stations are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; covariance matrix analysis, in particular, has been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
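
    To illustrate the time-delay principle the AML estimator builds on, here is a minimal 2-D TDOA solver via nonlinear least squares. The sensor layout, noise level, and use of scipy are assumptions; AML itself maximizes a likelihood over candidate locations rather than solving this least-squares problem.

    ```python
    # Hypothetical sketch: 2-D time-delay-of-arrival (TDOA) source localization.
    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0  # speed of sound in air (m/s)

    def tdoa_residuals(xy, sensors, tdoas):
        """Residuals between measured and predicted TDOAs (w.r.t. sensor 0)."""
        d = np.linalg.norm(sensors - xy, axis=1)   # distances to each sensor
        pred = (d[1:] - d[0]) / C                  # predicted delays vs. sensor 0
        return pred - tdoas

    # Four microphones (m) and a true source position used to synthesize data.
    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    source = np.array([3.0, 7.0])
    d_true = np.linalg.norm(sensors - source, axis=1)
    tdoas = (d_true[1:] - d_true[0]) / C + np.random.normal(0, 1e-5, 3)

    fit = least_squares(tdoa_residuals, x0=[5.0, 5.0], args=(sensors, tdoas))
    print("estimated source:", fit.x)   # close to [3, 7]
    ```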

  2. Mixed Far-Field and Near-Field Source Localization Algorithm via Sparse Subarrays

    Directory of Open Access Journals (Sweden)

    Jiaqi Song

    2018-01-01

    Based on a dual-size shift invariance sparse linear array, this paper presents a novel algorithm for the localization of mixed far-field and near-field (NF) sources. First, by constructing a cumulant matrix with only direction-of-arrival (DOA) information, the proposed algorithm decouples the DOA estimation from the range estimation. The cumulant-domain quarter-wavelength invariance yields unambiguous estimates of DOAs, which are then used as coarse references to resolve the phase ambiguities in the fine estimates induced by the larger spatial invariance. Then, based on the estimated DOAs, another cumulant matrix is derived and decoupled to generate unambiguous and cyclically ambiguous estimates of the range parameter. Using the coarse range estimates, the source types can be identified, and the unambiguous fine range estimates of the NF sources are obtained after disambiguation. Compared with some existing algorithms, the proposed algorithm enjoys an extended array aperture and higher estimation accuracy. Simulation results are given to validate the performance of the proposed algorithm.

  3. Analytic and Unambiguous Phase-Based Algorithm for 3-D Localization of a Single Source with Uniform Circular Array

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-02-01

    This paper presents an analytic algorithm for estimating the three-dimensional (3-D) location of a single source with uniform circular array (UCA) interferometers. Fourier transforms are exploited to expand the phase distribution of a single source, and the localization problem is reformulated as an equivalent spectrum manipulation problem. The 3-D parameters are decoupled into different spectra in the Fourier domain, and algebraic relations are established between the 3-D localization parameters and the Fourier spectra. The Fourier sampling theorem ensures that the minimum number of elements for 3-D localization of a single source with a UCA is five. Accuracy analysis provides mathematical insight into the 3-D localization algorithm, showing that a larger number of elements gives higher estimation accuracy. In addition, the phase-based high-order difference invariance (HODI) property of a UCA is derived and exploited to realize phase range compression, after which ambiguity resolution is addressed via the HODI of the UCA. A major advantage of the algorithm is that the ambiguity resolution and the 3-D localization estimation are both analytic and are processed simultaneously, and hence computationally efficient. Numerical simulations and experimental results are provided to verify the effectiveness of the proposed 3-D localization algorithm.

  4. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created that includes the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified, and a set of post-processing scripts was used to correct the results. To analyse the source properties, we ran the SAOTrace ray trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  5. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinya [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA]; Deng, Zhiqun Daniel [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA]; Rauchenstein, Lynn T. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA]; Carlson, Thomas J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA]

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements received from sensors is an important research area that is attracting much interest. In this paper, we present localization algorithms using times of arrival (TOA) and time differences of arrival (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations pose accuracy challenges, because of measurement errors, and efficiency challenges, because they lead to high computational burdens. Least-squares-based and maximum-likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
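
    For the circular (TOA) case, the range equations linearize by subtracting the first equation, reducing localization to ordinary least squares. A minimal sketch under assumed geometry (sensor layout and noise-free ranges are illustrative):

    ```python
    # Linearized trilateration: ||x||^2 - 2 s_i.x + ||s_i||^2 = r_i^2, and
    # subtracting the i=0 equation leaves a linear system in x.
    import numpy as np

    def toa_trilaterate(sensors, ranges):
        """Closed-form linearized trilateration from TOA-derived ranges."""
        s0, r0 = sensors[0], ranges[0]
        A = 2.0 * (sensors[1:] - s0)
        b = (r0**2 - ranges[1:]**2
             + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    sensors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
    true_pos = np.array([2.5, 4.0])
    ranges = np.linalg.norm(sensors - true_pos, axis=1)
    print(toa_trilaterate(sensors, ranges))  # ~ [2.5, 4.0]
    ```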

  6. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  7. Energy-Based Acoustic Source Localization Methods: A Survey

    Directory of Open Access Journals (Sweden)

    Wei Meng

    2017-02-01

    Energy-based source localization is an important problem in wireless sensor networks (WSNs) that has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least squares (NLS) methods, have been reported. The literature contains relevant review papers for localization in WSNs, e.g., for distance-based localization, but not much of the work related to energy-based source localization is covered by those existing reviews. Energy-based methods are proposed and specially designed for WSNs because of their limited sensor capabilities. This paper aims to give a comprehensive review of the different algorithms for energy-based single- and multiple-source localization problems, to discuss their merits and demerits, and to point out possible future research directions.
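
    To make the underlying energy-decay model concrete: sensor i reads approximately S/||x - s_i||^alpha, and a brute-force nonlinear least-squares grid search recovers the source position. This is a generic sketch, not any surveyed paper's estimator; parameter names and the grid are assumptions.

    ```python
    # NLS grid search over candidate positions; for each candidate, the
    # best-fitting source power S has a closed form (linear least squares).
    import numpy as np

    def nls_energy_localize(sensors, y, alpha=2.0, grid_step=0.1, extent=10.0):
        best, best_cost = None, np.inf
        for gx in np.arange(0.0, extent, grid_step):
            for gy in np.arange(0.0, extent, grid_step):
                d = np.linalg.norm(sensors - np.array([gx, gy]), axis=1)
                g = 1.0 / np.maximum(d, 1e-6) ** alpha
                S = (g @ y) / (g @ g)              # optimal power for this cell
                cost = np.sum((y - S * g) ** 2)
                if cost < best_cost:
                    best, best_cost = (gx, gy), cost
        return best

    sensors = np.random.uniform(0, 10, size=(12, 2))
    src = np.array([6.0, 3.0])
    y = 100.0 / np.linalg.norm(sensors - src, axis=1) ** 2   # noiseless readings
    print(nls_energy_localize(sensors, y))   # ~ (6.0, 3.0)
    ```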

  8. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric dipole-receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by means of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data; unlike conventional field-based localization methods, this does not require obtaining the field component in each direction, and so it can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment provide accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater environments.

  9. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Science.gov (United States)

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and we propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault-tolerance performance in multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and localizes them more accurately than other algorithms. PMID:22163972
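
    A minimal sketch, assuming simple additive updates, of the trust-index bookkeeping the three phases rely on; the data structures and step size are illustrative, not the paper's exact rules.

    ```python
    # After each reporting cycle, nodes whose binary alarm agrees with the
    # fused event estimate gain trust, disagreeing nodes lose it; later
    # reports would be weighted by the current trust index.
    def update_trust(trust, reports, expected, step=0.1):
        """trust/reports/expected: dicts keyed by node id; reports/expected are 0/1."""
        for node, bit in reports.items():
            if bit == expected[node]:
                trust[node] = min(1.0, trust[node] + step)
            else:
                trust[node] = max(0.0, trust[node] - step)
        return trust

    trust = {"n1": 0.5, "n2": 0.5, "n3": 0.5}
    trust = update_trust(trust, {"n1": 1, "n2": 0, "n3": 1},
                         {"n1": 1, "n2": 1, "n3": 1})
    print(trust)  # n1 and n3 rise; n2 falls
    ```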

  10. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Directory of Open Access Journals (Sweden)

    Jian Wan

    2011-06-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and we propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault-tolerance performance in multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and localizes them more accurately than other algorithms.

  11. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI.

    Science.gov (United States)

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-03-19

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, Wi-Fi fingerprints are susceptible to multipath interference, signal attenuation, and environmental changes, which lead to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity, as a result of which real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use the Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimate. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data, together with the KWNN output, in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm can significantly improve the accuracy, stability, and applicability of positioning.
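
    The coarse KWNN stage can be sketched as follows, with a toy fingerprint database and inverse-distance weighting as assumptions.

    ```python
    # KWNN: find the k training fingerprints closest in RSSI space and
    # average their known positions, weighted by inverse RSSI distance.
    import numpy as np

    def kwnn_locate(db_rssi, db_pos, query_rssi, k=3):
        dists = np.linalg.norm(db_rssi - query_rssi, axis=1)
        idx = np.argsort(dists)[:k]
        w = 1.0 / (dists[idx] + 1e-6)
        return (w[:, None] * db_pos[idx]).sum(axis=0) / w.sum()

    # Toy database: RSSI vectors from 3 access points, taken at known positions.
    db_rssi = np.array([[-40, -70, -60], [-45, -65, -62],
                        [-70, -40, -55], [-72, -42, -50]], float)
    db_pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 4.0], [6.0, 4.0]])
    print(kwnn_locate(db_rssi, db_pos, np.array([-44.0, -66.0, -61.0])))
    ```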

  12. Three-dimensional localization of low activity gamma-ray sources in real-time scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Manish K., E-mail: mksrkf@mst.edu; Alajo, Ayodeji B.; Lee, Hyoung K.

    2016-03-21

    Radioactive source localization plays an important role in tracking radiation threats in homeland security tasks. Its real-time application requires computationally efficient and reasonably accurate algorithms, even with limited data, to support detection with minimum uncertainty. This paper describes a statistics-based grid-refinement method for backtracing the position of a gamma-ray source in a three-dimensional domain in real time. The developed algorithm uses measurements from various known detector positions to localize the source. The algorithm is based on the inverse-square relationship between the source intensity at a detector and the distance from the source to the detector. The domain discretization was developed and implemented in MATLAB. The algorithm was tested and verified with simulation results for an ideal case of a point source in a non-attenuating medium. Subsequently, an experimental validation of the algorithm was performed to determine the suitability of deploying this scheme in real-time scenarios. Using the measurements from five known detector positions and a measurement time of 3 min, the source position was estimated with an accuracy of approximately 53 cm. The accuracy improved and stabilized to approximately 25 cm for longer measurement times. It was concluded that the error in source localization was primarily due to detection uncertainties. In the verification and experimental validation of the algorithm, the distance between the 137Cs source and any detector position was between 0.84 m and 1.77 m. The results were also compared with the least squares method. Since the discretization algorithm was validated with a weak source, it is expected to localize sources of higher activity in real time. It is believed that, for the same physical placement of source and detectors, a source of approximate activity 0.61–0.92 mCi could be localized in real time with 1 s of measurement time and the same accuracy. The accuracy and computational
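
    A hedged reconstruction of the inverse-square grid-refinement idea in 3-D: each candidate cell is scored by how well counts fit c_j proportional to A/r_j^2, and the grid is then shrunk around the best cell. Grid sizes and the least-squares intensity fit below are assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def grid_localize(detectors, counts, lo, hi, n=20, levels=3):
        for _ in range(levels):
            xs = np.linspace(lo[0], hi[0], n)
            ys = np.linspace(lo[1], hi[1], n)
            zs = np.linspace(lo[2], hi[2], n)
            best, best_cost = None, np.inf
            for x in xs:
                for y in ys:
                    for z in zs:
                        r2 = np.sum((detectors - (x, y, z)) ** 2, axis=1) + 1e-9
                        g = 1.0 / r2
                        A = (g @ counts) / (g @ g)       # LS source intensity
                        cost = np.sum((counts - A * g) ** 2)
                        if cost < best_cost:
                            best, best_cost = np.array([x, y, z]), cost
            span = (np.asarray(hi) - np.asarray(lo)) / n  # shrink around best
            lo, hi = best - span, best + span
        return best

    detectors = np.random.uniform(0, 2, size=(5, 3))
    src = np.array([0.8, 1.1, 0.4])
    counts = 1000.0 / np.sum((detectors - src) ** 2, axis=1)  # noiseless counts
    print(grid_localize(detectors, counts, [0, 0, 0], [2, 2, 2]))
    ```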

  13. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Science.gov (United States)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line of sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and the unknown locations of transmitters. Estimating transmitter locations from these nonlinear equations may not be very accurate, because of receiver location errors and receiver measurement errors, and can impose high computational burdens. Least-squares- and maximum-likelihood-based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers, and by improving transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies, such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  14. A Study on Water Pollution Source Localization in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2016-01-01

    Water pollution source localization is of great significance to water environment protection. In this paper, a study on water pollution source localization is presented. First, source detection is discussed. Then, coarse localization methods and localization methods based on diffusion models are introduced and analyzed, respectively. In addition, a localization method based on contours is proposed. Finally, the detection and localization methods are compared in experiments. The results show that the detection method using hypothesis testing is more stable. The performance of the coarse localization algorithm depends on the node density. Localization based on a diffusion model can yield precise localization results; however, the results are not stable. The contour-based localization method is better than the other two localization methods when the concentration contours are axisymmetric. Thus, in water pollution source localization, detection using hypothesis testing is preferable in the source detection step. If the concentration contours are axisymmetric, the contour-based localization method is the first option; if the nodes are dense and there is no explicit diffusion model, the coarse localization algorithm can be used; otherwise, localization based on diffusion models is a good choice.

  15. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman

    2017-01-05

    The source localization problem for the Poisson equation with noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and this algorithm, together with the available noisy boundary data from the Poisson problem, is then used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is treated as time-like. Numerical implementation along with simulation results is detailed towards the end.

  16. Robust iterative observer for source localization for Poisson equation

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2017-01-01

    The source localization problem for the Poisson equation with noisy boundary data is well known to be highly sensitive to noise. The problem is ill-posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and this algorithm, together with the available noisy boundary data from the Poisson problem, is then used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is treated as time-like. Numerical implementation along with simulation results is detailed towards the end.

  17. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies, since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions for localization algorithms in UWSNs. PMID:22438752

  18. RSS-based localization of isotropically decaying source with unknown power and pathloss factor

    International Nuclear Information System (INIS)

    Sun, Shunyuan; Sun, Li; Ding, Zhiguo

    2016-01-01

    This paper addresses the localization of an isotropically decaying source based on received signal strength (RSS) measurements collected from nearby active sensors that have known positions and are wirelessly connected, and it proposes a novel iterative algorithm for RSS-based source localization in order to improve the location accuracy and realize real-time localization and automatic monitoring of hospital patients and medical equipment in the smart hospital. In particular, we consider the general case where the source power and pathloss factor are both unknown. For this source localization problem, we propose an iterative algorithm in which the unknown source position and the two other unknown parameters (i.e., the source power and pathloss factor) are estimated in an alternating fashion, starting from a proposed sub-optimal initial estimate of the source position obtained from the RSS measurements collected at a few of the closest active sensors (those with the largest RSS values). Analysis and simulation show that the proposed iterative algorithm is guaranteed to converge globally to the least-squares (LS) solution, and that, for independent and identically distributed (i.i.d.) zero-mean Gaussian RSS measurement errors, the converged localization performance achieves the optimum corresponding to the Cramer–Rao lower bound (CRLB).
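
    A rough sketch of such an alternating scheme, with all details assumed and simplified relative to the paper: ranges implied by the current power/pathloss estimates give a position by linearized trilateration, and the position in turn refits the power and pathloss by linear regression. In practice a sensible initialization (as the paper proposes) matters.

    ```python
    # Log-distance model: rss = P0 - 10*alpha*log10(d).
    import numpy as np

    def trilaterate(sensors, ranges):
        s0, r0 = sensors[0], ranges[0]
        A = 2.0 * (sensors[1:] - s0)
        b = r0**2 - ranges[1:]**2 + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    def alternating_rss(sensors, rss, iters=20):
        P0, alpha = rss.max(), 2.0                     # crude initial guesses
        for _ in range(iters):
            d = 10.0 ** ((P0 - rss) / (10.0 * alpha))  # ranges implied by model
            x = trilaterate(sensors, d)                # position given (P0, alpha)
            d_geo = np.maximum(np.linalg.norm(sensors - x, axis=1), 1e-6)
            A = np.column_stack([np.ones_like(d_geo), -10.0 * np.log10(d_geo)])
            (P0, alpha), *_ = np.linalg.lstsq(A, rss, rcond=None)  # params given x
        return x, P0, alpha

    sensors = np.random.uniform(0, 20, size=(8, 2))
    src, P0_true, a_true = np.array([12.0, 5.0]), -30.0, 2.5
    rss = P0_true - 10 * a_true * np.log10(np.linalg.norm(sensors - src, axis=1))
    print(alternating_rss(sensors, rss))   # ~ ([12, 5], -30, 2.5)
    ```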

  19. Hybrid Firefly Variants Algorithm for Localization Optimization in WSN

    Directory of Open Access Journals (Sweden)

    P. SrideviPonmalar

    2017-01-01

    Localization is one of the key issues in wireless sensor networks, and several algorithms and techniques have been introduced for it; localization is the procedure of estimating a sensor node's location. In this paper, three novel hybrid algorithms based on the firefly algorithm are proposed for the localization problem. The Hybrid Genetic Algorithm-Firefly Localization Algorithm (GA-FFLA), the Hybrid Differential Evolution-Firefly Localization Algorithm (DE-FFLA) and the Hybrid Particle Swarm Optimization-Firefly Localization Algorithm (PSO-FFLA) are analyzed, designed and implemented to minimize the localization error. The localization algorithms are compared based on the accuracy of the location estimates, time complexity and the number of iterations required to achieve that accuracy. All the algorithms achieve one hundred percent estimation accuracy, but they vary in the number of fireflies required, in time complexity and in the number of iterations required. Keywords: Localization; Genetic Algorithm; Differential Evolution; Particle Swarm Optimization
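
    A bare-bones firefly search applied to range-based localization; the objective, constants, and cooling schedule are illustrative, and the paper's hybrids would graft GA, DE, or PSO operators onto this loop.

    ```python
    # Fireflies are candidate positions; dimmer flies move toward brighter
    # (lower-cost) ones with distance-attenuated attraction plus a random walk.
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(pos, anchors, dists):
        return np.sum((np.linalg.norm(anchors - pos, axis=1) - dists) ** 2)

    def firefly_localize(anchors, dists, n=20, iters=100,
                         beta0=1.0, gamma=0.1, alpha=0.2):
        flies = rng.uniform(0, 10, size=(n, 2))
        for _ in range(iters):
            f = np.array([fitness(p, anchors, dists) for p in flies])
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:                  # j is brighter: i moves
                        r2 = np.sum((flies[i] - flies[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        flies[i] += (beta * (flies[j] - flies[i])
                                     + alpha * rng.normal(0, 0.1, 2))
            alpha *= 0.98                            # cool the random walk
        f = np.array([fitness(p, anchors, dists) for p in flies])
        return flies[int(np.argmin(f))]

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    target = np.array([3.0, 6.0])
    dists = np.linalg.norm(anchors - target, axis=1)
    print(firefly_localize(anchors, dists))   # ~ [3, 6]
    ```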

  20. A locally adaptive algorithm for shadow correction in color images

    Science.gov (United States)

    Karnaukhov, Victor; Kober, Vitaly

    2017-09-01

    The paper deals with the correction of color images distorted by spatially nonuniform illumination. Serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for the correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of the nonuniform illumination using a human visual perception approach. The performance of the proposed algorithm is compared to that of common algorithms for the correction of color images containing shadow regions.

  1. Near-Field Source Localization Using a Special Cumulant Matrix

    Science.gov (United States)

    Cui, Han; Wei, Gang

    A new near-field source localization algorithm based on a uniform linear array is proposed. The algorithm estimates each parameter separately but does not need parameter pairing. It can be divided into two main steps. The first step is bearing-related electric angle estimation based on the ESPRIT algorithm, achieved by constructing a special cumulant matrix. The second step estimates the other electric angle from the 1-D MUSIC spectrum. The method offers much lower computational complexity than the traditional near-field 2-D MUSIC algorithm and performs better than the high-order ESPRIT algorithm. Simulation results demonstrate that the performance of the proposed algorithm is close to the Cramer-Rao Bound (CRB).

  2. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    Science.gov (United States)

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive area of research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using the a priori information about the location of the source, that is, the STN. Second, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  3. MR-based source localization for MR-guided HDR brachytherapy

    Science.gov (United States)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolution. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determining the HDR source positions during treatment. Second, when a dummy source is used, MR-based source localization provides automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulating the MR artifacts and then applying a phase correlation localization algorithm to the MR images and the simulated images to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by comparison with CT, and its accuracy and precision were investigated. The results demonstrated that the described method can determine the HDR source position with high accuracy (0.4–0.6 mm) and high precision (≤0.1 mm) at high temporal resolution (0.15–1.2 s per slice). This would enable real-time treatment verification as well as automatic detection of the source dwell positions.
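
    A minimal sketch of the phase-correlation matching step, assuming a toy artifact template; the real method matches simulated MR artifact images and refines the peak to subpixel precision.

    ```python
    # The peak of the inverse FFT of the normalized cross-power spectrum gives
    # the (row, col) offset of the template within the image.
    import numpy as np

    def phase_correlate(image, template):
        F1, F2 = np.fft.fft2(image), np.fft.fft2(template)
        R = F1 * np.conj(F2)
        R /= np.maximum(np.abs(R), 1e-12)           # keep phase only
        corr = np.real(np.fft.ifft2(R))
        shift = np.unravel_index(np.argmax(corr), corr.shape)
        # interpret shifts above N/2 as negative offsets (circular wrap)
        return tuple(s - n if s > n // 2 else s for s, n in zip(shift, corr.shape))

    template = np.zeros((64, 64)); template[32, 32] = 1.0   # toy artifact model
    image = np.roll(np.roll(template, 5, axis=0), -3, axis=1)
    print(phase_correlate(image, template))   # (5, -3)
    ```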

  4. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    In order to discover the structure of local communities more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node. Since the agglomeration ability of a single node is necessarily less than that of multiple nodes, the community extension in this algorithm no longer starts from the initial node only, but from a node cluster that contains the initial node and whose nodes are relatively densely connected with each other. The algorithm consists of two main phases: it first detects the minimal cluster and then finds the local community extended from the minimal cluster. Experimental results show that the quality of the local communities detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.

  5. Acoustic Source Localization and Beamforming: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Chen Joe C

    2003-01-01

    We consider the theoretical and practical aspects of locating acoustic sources using an array of microphones. A maximum-likelihood (ML) direct localization is obtained when the sound source is near the array, while in the far-field case we demonstrate localization via cross bearings from several widely separated arrays. In the case of multiple sources, an alternating projection procedure is applied to determine the ML estimate of the directions of arrival (DOAs) from the observed data. The ML estimator is shown to be effective in locating sound sources of various types, for example, vehicle, music, and even white noise. From the theoretical Cramér-Rao bound analysis, we find that better source location estimates can be obtained for high-frequency signals than for low-frequency signals. In addition, a large range estimation error results when the source signal is unknown, but this unknown parameter does not have much impact on angle estimation. Extensive experimentally measured acoustic data was used to verify the proposed algorithms.

  6. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared with traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm obtains schematic coordinates of the nodes by fast mapping initialization, fast mapping, and a coordinate transform; these initialize the coordinates of the MDS algorithm, which yields an accurate estimate of the node coordinates, and a Procrustes analysis is then used to align the coordinates and obtain the final position coordinates of the nodes. The paper gives the specific implementation steps of this four-step algorithm. Finally, the algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on specific examples. The experimental results show that the proposed localization algorithm maintains the positioning accuracy of multidimensional scaling under certain conditions while greatly improving the speed of operation.
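
    The core step behind MDS-based localization is classical multidimensional scaling of the inter-node distance matrix; this generic sketch is not the paper's accelerated pipeline.

    ```python
    # Classical MDS: double-center the squared-distance matrix to get a Gram
    # matrix, then embed using its top eigenvectors. The result matches the
    # true layout up to rotation, reflection, and translation.
    import numpy as np

    def classical_mds(D, dim=2):
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]            # largest eigenvalues
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    pts = np.random.uniform(0, 10, size=(6, 2))
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rel = classical_mds(D)     # relative coordinates of the 6 nodes
    print(rel.shape)           # (6, 2)
    ```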

  7. Source localization using recursively applied and projected (RAP) MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.C. [Los Alamos National Lab., NM (United States)]; Leahy, R.M. [Univ. of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.]

    1998-03-01

    A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S-MUSIC and IES-MUSIC.
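
    For flavor, here is a plain MUSIC scan on a simulated uniform linear array; it reports only the dominant pseudospectrum peak, whereas a RAP-style extension would project the located source's steering vector out of both the array manifold and the signal-subspace estimate and rescan. All array parameters below are illustrative assumptions.

    ```python
    import numpy as np

    def steering(theta, m, spacing=0.5):
        k = 2 * np.pi * spacing * np.sin(theta)
        return np.exp(1j * k * np.arange(m))

    def music_scan(R, n_src, m):
        w, V = np.linalg.eigh(R)                 # ascending eigenvalues
        En = V[:, :m - n_src]                    # noise subspace
        grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
        p = [1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2
             for t in grid]
        return grid[int(np.argmax(p))]           # dominant peak only

    # Two far-field sources at -20 and 35 degrees on an 8-element ULA.
    m, snaps = 8, 400
    angles = np.deg2rad([-20.0, 35.0])
    A = np.column_stack([steering(t, m) for t in angles])
    S = (np.random.randn(2, snaps) + 1j * np.random.randn(2, snaps)) / np.sqrt(2)
    N = 0.1 * (np.random.randn(m, snaps) + 1j * np.random.randn(m, snaps))
    X = A @ S + N
    R = X @ X.conj().T / snaps
    print(np.rad2deg(music_scan(R, 2, m)))   # near one of the true angles
    ```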

  8. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    Science.gov (United States)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded at only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and we use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.

  9. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    Science.gov (United States)

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. The effect of multimicrophone noise reduction systems on sound source localization by users of binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2008-07-01

    This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach introduces a better combination of sound source localization and noise reduction performance than the ADM approach.

  11. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  12. Source localization in an ocean waveguide using supervised machine learning.

    Science.gov (United States)

    Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter

    2017-09-01

    Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
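
    A toy sketch of the classification variant under stated assumptions: normalized sample covariance matrices are vectorized into features for an off-the-shelf random forest. The synthetic data generator below is a stand-in for real array snapshots, not the Noise09 data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def scm_features(X):
        """X: (elements, snapshots) complex pressure; returns real features."""
        Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # per-snapshot norm
        R = Xn @ Xn.conj().T / X.shape[1]                   # normalized SCM
        iu = np.triu_indices(R.shape[0])
        return np.concatenate([R[iu].real, R[iu].imag])

    rng = np.random.default_rng(0)

    def fake_snapshot(range_m, n_el=16, snaps=32):   # toy range-dependent field
        phase = np.exp(1j * 2 * np.pi * range_m / 500.0 * np.arange(n_el))
        sig = phase[:, None] * (rng.standard_normal((1, snaps))
                                + 1j * rng.standard_normal((1, snaps)))
        noise = 0.3 * (rng.standard_normal((n_el, snaps))
                       + 1j * rng.standard_normal((n_el, snaps)))
        return sig + noise

    ranges = [1000, 2000, 3000]                      # discretized range classes
    Xtr = [scm_features(fake_snapshot(r)) for r in ranges for _ in range(30)]
    ytr = [r for r in ranges for _ in range(30)]
    clf = RandomForestClassifier(n_estimators=100).fit(Xtr, ytr)
    print(clf.predict([scm_features(fake_snapshot(2000))]))  # ~ [2000]
    ```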

  13. Application of Matrix Pencil Algorithm to Mobile Robot Localization Using Hybrid DOA/TOA Estimation

    Directory of Open Access Journals (Sweden)

    Lan Anh Trinh

    2012-12-01

    Localization plays an important role in robotics for the tasks of monitoring, tracking and controlling a robot. Much effort has been made to address robot localization problems in recent years. However, despite many proposed solutions and thorough study, developing a low-cost and fast-processing method for multiple-source signals remains a challenge. In this paper, we propose a solution to the robot localization problem with regard to these concerns. To locate a robot, both its coordinates and its orientation are necessary. We develop a localization method using the Matrix Pencil (MP) algorithm for hybrid estimation of direction of arrival (DOA) and time of arrival (TOA). The TOA of the signal is estimated to compute the distance between the mobile robot and a base station (BS). Based on the distance and the estimated DOA, we can estimate the mobile robot's position. The characteristics of the algorithm are examined through analysis of simulated experiments, and the results demonstrate the advantages of our method over previous works in dealing with the above challenges. The method is built on a low-cost infrastructure of radio frequency devices, and the DOA/TOA estimation is performed with just a singular value decomposition, for fast processing. Finally, the MP algorithm combined with tracking using a Kalman filter allows our proposed method to locate the positions of multiple source signals.

  14. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  15. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)]; Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States)]; Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)]

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We used MCNPX simulations to demonstrate that surrounding the boron straw detectors with an HDPE coded moderator gives a source-detector orientation-specific response that enables potential 1D source localization in a design with high neutron detection efficiency. An initial test algorithm was developed to confirm the viability of this detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg of WGPu at 50 m standoff to within ±11°.

  16. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, the localization of gas emission sources, and the building of gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions, and they become even more challenging when airborne. This is especially so when using a VTOL-based micro-drone, which induces disturbances through its rotors that heavily affect the gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called the "pseudo gradient algorithm". The latter extracts, from two spatially separated measuring positions, the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as the main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm
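
    The particle-filter idea can be sketched generically as follows; the detection likelihood here is a placeholder stand-in, not the thesis' plume model, and all constants are assumptions.

    ```python
    # Particles are hypotheses of the source position; each gas "hit" or
    # "miss" reweights them, followed by resampling with jitter.
    import numpy as np

    rng = np.random.default_rng(1)

    def likelihood(particles, robot_xy, hit, spread=3.0):
        """Toy model: detection probability decays with distance to the source."""
        d = np.linalg.norm(particles - robot_xy, axis=1)
        p_hit = np.exp(-d / spread)
        return p_hit if hit else 1.0 - p_hit

    def pf_step(particles, weights, robot_xy, hit):
        weights = weights * likelihood(particles, robot_xy, hit)
        weights /= weights.sum()
        idx = rng.choice(len(particles), len(particles), p=weights)   # resample
        particles = particles[idx] + rng.normal(0, 0.2, particles.shape)
        return particles, np.full(len(particles), 1.0 / len(particles))

    particles = rng.uniform(0, 20, size=(500, 2))
    weights = np.full(500, 1.0 / 500)
    source = np.array([14.0, 6.0])
    for pos in rng.uniform(0, 20, size=(40, 2)):      # random survey positions
        hit = rng.random() < np.exp(-np.linalg.norm(pos - source) / 3.0)
        particles, weights = pf_step(particles, weights, pos, hit)
    print(particles.mean(axis=0))   # clusters near the source over time
    ```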

  17. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, the localization of gas emission sources, and the building of gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions, and they become even more challenging when airborne. This is especially so when using a VTOL-based micro-drone, which induces disturbances through its rotors that heavily affect the gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called the "pseudo gradient algorithm". The latter extracts, from two spatially separated measuring positions, the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as the main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm

  18. EEG and MEG source localization using recursively applied (RAP) MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.C. [Los Alamos National Lab., NM (United States)]; Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.]

    1996-12-31

    The multiple signal classification (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, using the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of "diverse polarization", are easily extracted using the associated principal vectors.

  20. An improved cut-and-solve algorithm for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Klose, Andreas; Nielsen, Lars Relund

    2018-01-01

    In this paper, we present an improved cut-and-solve algorithm for the single-source capacitated facility location problem. The algorithm consists of three phases. The first phase strengthens the integer program by a cutting plane algorithm to obtain a tight lower bound. The second phase uses a two-level local branching heuristic to find an upper bound, and if optimality has not yet been established, the third phase uses the cut-and-solve framework to close the optimality gap. Extensive computational results are reported, showing that the proposed algorithm runs 10–80 times faster on average compared...

  1. Near-Field Source Localization by Using Focusing Technique

    Science.gov (United States)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is consequently obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.

  2. Near-Field Source Localization by Using Focusing Technique

    Directory of Open Access Journals (Sweden)

    Joseph Saillard

    2008-12-01

    Full Text Available We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is consequently obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.

  3. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

    Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
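
    The first-order propagation step can be illustrated generically: for a function f of noisy inputs with covariance Σ, the output covariance is approximated by J Σ Jᵀ, with J the Jacobian of f. The sketch below uses a finite-difference Jacobian and is not the LAL equations themselves:

```python
import numpy as np

def propagate_covariance(f, x, cov_x, eps=1e-6):
    """First-order error propagation: cov_y ~= J cov_x J^T, where J is the
    Jacobian of f at x, estimated here by central finite differences."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * eps)
    return J @ np.asarray(cov_x) @ J.T
```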

  4. A Source Identification Algorithm for INTEGRAL

    Science.gov (United States)

    Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.

    2008-12-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples.

  5. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    Science.gov (United States)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD employing acoustic emission (AE) techniques, a kind of non-destructive testing, has received increasing attention owing to its powerful locating capability and high precision. The localization algorithm is the key factor deciding the localization accuracy in AE localization of PD. Many kinds of localization algorithms, both intelligent and non-intelligent, exist for PD source localization adopting AE techniques. However, the existing algorithms possess defects such as the premature convergence phenomenon, poor local optimization ability, and unsuitability for field applications. To overcome the poor local optimization ability and the easily triggered premature convergence of the fundamental genetic algorithm (GA), a new kind of improved GA is proposed, namely the sequential quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local searching ability of the fundamental GA is improved effectively and the premature convergence phenomenon is overcome. Experimental results of numerical simulations on benchmark functions show that the hybrid SQP-GA outperforms the fundamental GA in convergence speed and optimization precision, and the proposed algorithm has an outstanding optimization effect. The SQP-GA is then applied to the ultrasonic localization problem of PD in transformers, and an ultrasonic localization method for PD in transformers based on the SQP-GA is proposed.
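
    A toy sketch of the hybrid idea, assuming scipy is available: a real-coded GA whose best individual is polished each generation by an SQP local search (scipy's SLSQP). All operators and constants below are illustrative, not the paper's exact design:

```python
import numpy as np
from scipy.optimize import minimize

def sqp_ga(objective, bounds, pop=40, gens=60, seed=0):
    """Toy SQP-GA-style hybrid: a basic real-coded GA whose current best
    individual is refined every generation by an SLSQP local search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(gens):
        fit = np.apply_along_axis(objective, 1, X)
        X = X[np.argsort(fit)]                       # elitist ranking
        # SQP step: polish the best individual (the hybrid ingredient).
        res = minimize(objective, X[0], method="SLSQP",
                       bounds=list(zip(lo, hi)))
        if res.success and res.fun < fit.min():
            X[0] = res.x
        # Blend-crossover of elite parents plus Gaussian mutation.
        parents = X[: pop // 2]
        pick = rng.integers(0, len(parents), size=(pop - len(parents), 2))
        a = rng.random((len(pick), 1))
        kids = a * parents[pick[:, 0]] + (1 - a) * parents[pick[:, 1]]
        kids += rng.normal(0.0, 0.05 * (hi - lo), kids.shape)
        X = np.vstack([parents, np.clip(kids, lo, hi)])
    return min(X, key=objective)
```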

  6. An alternative subspace approach to EEG dipole source localization

    Science.gov (United States)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  7. An alternative subspace approach to EEG dipole source localization

    International Nuclear Information System (INIS)

    Xu Xiaoliang; Xu, Bobby; He Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  8. An Efficient Local Algorithm for Distributed Multivariate Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm is designed for distributed...

  9. Source localization analysis using seismic noise data acquired in exploration geophysics

    Science.gov (United States)

    Roux, P.; Corciulo, M.; Campillo, M.; Dubuq, D.

    2011-12-01

    Passive monitoring using seismic noise data is attracting growing interest at the exploration scale. Recent studies demonstrated source localization capability using seismic noise cross-correlation at observation scales ranging from hundreds of kilometers to meters. In the context of exploration geophysics, classical localization methods using travel-time picking fail when no evident first arrivals can be detected. Likewise, methods based on the intensity decrease as a function of distance to the source also fail when the noise intensity decay gets more complicated than the power law expected from geometrical spreading. We propose here an automatic procedure developed in ocean acoustics that iteratively locates the dominant and secondary noise sources. The Matched-Field Processing (MFP) technique is based on the spatial coherence of raw noise signals acquired on a dense array of receivers in order to produce high-resolution source localizations. The standard MFP algorithm locates the dominant noise source by matching the seismic noise Cross-Spectral Density Matrix (CSDM) with the equivalent CSDM calculated from a model and a surrogate source position that scans each position of a 3D grid below the array of seismic sensors. However, at the exploration scale, the background noise is mostly dominated by surface noise sources related to human activities (roads, industrial platforms, ...), whose localization is of no interest for the monitoring of the hydrocarbon reservoir. In other words, the dominant noise sources mask lower-amplitude noise sources associated with the extraction process (in the volume), and their localization is therefore difficult with the standard MFP technique. Multi-Rate Adaptive Beamforming (MRABF) is a further improvement of the MFP technique that locates low-amplitude secondary noise sources using a projector matrix calculated from the eigenvalue decomposition of the CSDM. The MRABF approach aims at cancelling the contributions of
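
    The core of the standard MFP step is compact enough to sketch: for each candidate source position, correlate a modeled replica vector with the measured CSDM and take the peak of the resulting ambiguity surface. A minimal numpy sketch, where `replicas` would come from a hypothetical propagation model:

```python
import numpy as np

def bartlett_mfp(csdm, replicas):
    """Bartlett matched-field processor: ambiguity surface over candidates.
    csdm     -- (n, n) measured cross-spectral density matrix
    replicas -- iterable of length-n modeled field vectors, one per grid point
    """
    surface = []
    for d in replicas:
        w = d / np.linalg.norm(d)        # normalized replica (steering) vector
        surface.append(np.real(w.conj() @ csdm @ w))
    return np.asarray(surface)           # its peak is the source estimate
```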

  10. Pollution source localization in an urban water supply network based on dynamic water demand.

    Science.gov (United States)

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be implemented effectively by placing sensors in the water supply network. However, locating the source of pollution from the data obtained by the water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which together make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., the water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a dynamic one driven by fluctuating consumer demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and the detected sensor values. Simulation experiments were conducted using two urban water supply networks of different sizes, and the experimental results were compared with those of the standard genetic algorithm.

  11. A Scalable Local Algorithm for Distributed Multivariate Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm can be used for distributed...

  12. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by the use of a video source model is suggested. The proposed method allows significantly improved algorithm testing over a big test set.

  13. Mobile robots for localizing gas emission sources on landfill sites: is bio-inspiration the way to go?

    Science.gov (United States)

    Hernandez Bennetts, Victor; Lilienthal, Achim J; Neumann, Patrick P; Trincavelli, Marco

    2011-01-01

    Roboticists often take inspiration from animals for designing sensors, actuators, or algorithms that control the behavior of robots. Bio-inspiration is motivated by the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odor plume. In particular, the task of tracking an odor plume up to its source has nearly exclusively been addressed using biologically inspired algorithms, and robots have been developed, for example, to mimic the behavior of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behavior of animals, can be usefully "translated" into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically inspired algorithms.

  14. Mobile Robots for Localizing Gas Emission Sources on Landfill Sites: Is Bio-Inspiration the Way to Go?

    Directory of Open Access Journals (Sweden)

    Victor eHernandez Bennetts

    2012-01-01

    Full Text Available Roboticists often take inspiration from animals for designing sensors, actuators or algorithms that control the behaviour of robots. Bio-inspiration is motivated by the uncanny ability of animals to solve complex tasks like recognizing and manipulating objects, walking on uneven terrains, or navigating to the source of an odour plume. In particular, the task of tracking an odour plume up to its source has nearly exclusively been addressed using biologically inspired algorithms, and robots have been developed, for example, to mimic the behaviour of moths, dung beetles, or lobsters. In this paper we argue that biomimetic approaches to gas source localization are of limited use, primarily because animals differ fundamentally in their sensing and actuation capabilities from state-of-the-art gas-sensitive mobile robots. To support our claim, we compare actuation and chemical sensing available to mobile robots to the corresponding capabilities of moths. We further characterize airflow and chemosensor measurements obtained with three different robot platforms (two wheeled robots and one flying micro-drone) in four prototypical environments and show that the assumption of a constant and unidirectional airflow, which is the basis of many gas source localization approaches, is usually far from being valid. This analysis should help to identify how underlying principles, which govern the gas source tracking behaviour of animals, can be usefully translated into gas source localization approaches that fully take into account the capabilities of mobile robots. We also describe the requirements for a reference application, monitoring of gas emissions at landfill sites with mobile robots, and discuss an engineered gas source localization approach based on statistics as an alternative to biologically-inspired algorithms.

  15. An Algorithm for the Accurate Localization of Sounds

    National Research Council Canada - National Science Library

    MacDonald, Justin A

    2005-01-01

    .... The algorithm requires no a priori knowledge of the stimuli to be localized. The accuracy of the algorithm was tested using binaural recordings from a pair of microphones mounted in the ear canals of an acoustic mannequin...

  16. Document localization algorithms based on feature points and straight lines

    Science.gov (United States)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    An important part of a planar rectangular object analysis system is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes such subsystems as the selection and recognition of text fields, the usage of contexts, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.
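
    A common realization of feature-point localization of this kind can be sketched with OpenCV (assuming ORB features and RANSAC homography fitting; the paper's exact detectors and line analysis are not reproduced):

```python
import cv2
import numpy as np

def locate_document(template, photo):
    """Estimate the template->photo projective transform from ORB feature
    matches with RANSAC; both inputs are grayscale images."""
    orb = cv2.ORB_create(1500)
    k1, d1 = orb.detectAndCompute(template, None)
    k2, d2 = orb.detectAndCompute(photo, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # 3x3 homography mapping template coordinates into the photo
```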

  17. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\\hat{\\mathcal{D}}$

    OpenAIRE

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.

  18. Genetic local search algorithm for optimization design of diffractive optical elements.

    Science.gov (United States)

    Zhou, G; Chen, Y; Wang, Z; Song, H

    1999-07-10

    We propose a genetic local search algorithm (GLSA) for the optimization design of diffractive optical elements (DOEs). This hybrid algorithm incorporates the advantages of both the genetic algorithm (GA) and local search techniques, and it appears better able to locate the global minimum than a canonical GA. Sample cases investigated here include the optimization design of binary-phase Dammann gratings, continuous surface-relief grating array generators, and a uniform top-hat focal plane intensity profile generator. Two GLSAs, incorporating the hill-climbing method and the simulated annealing algorithm as their respective local search techniques, are investigated. Numerical experimental results demonstrate that the proposed algorithm is highly efficient and robust. DOEs that have high diffraction efficiency and excellent uniformity can be achieved by use of the algorithm we propose.

  19. Adaptive local backlight dimming algorithm based on local histogram and image characteristics

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari

    2013-01-01

    Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on this average, an adaptive quantile value is extracted. The proposed algorithm provides a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature-based backlight algorithms.

  20. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
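
    For reference, a plain global-best PSO of the kind each of the two coupled procedures builds on is sketched below; in a Dual-PSO-style setup, `cost` would measure the mismatch between modeled and measured water-quality parameters. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def pso(cost, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm optimization over box-constrained space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost                    # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]           # update the global best
    return g, pcost.min()
```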

  1. Extended SVM algorithms for multilevel trans-Z-source inverter

    Directory of Open Access Journals (Sweden)

    Aida Baghbany Oskouei

    2016-03-01

    Full Text Available This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works with high switching frequency and does not generate the mean value of the desired load voltage in every switching interval. In this topology, unlike the traditional cascaded multilevel inverter, the output voltage is not limited to the dc voltage source and can be increased with trans-Z-network shoot-through state control. Besides, it is more reliable against short circuits, and due to the several dc sources in each phase of this topology, it is possible to use it in hybrid renewable energy. The proposed SVM algorithms include a combined modulation algorithm (SVPWM) and a shoot-through implementation in the dwell times of the voltage vectors algorithm. These algorithms are compared from the viewpoints of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected representations.

  2. Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.

    Science.gov (United States)

    Chen, Xin; Liu, Zhen; Wei, Xizhang

    2017-05-11

    Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is addressed to resolve the angle ambiguity. In the SGAS algorithm, each subarray formed by two couples of centro-symmetric sensors obtains a batch of results under different ambiguities, and by searching the nearest value among subarrays, which always corresponds to the correct ambiguity, rough angle estimation with no ambiguity is realized. Then, the unambiguous angles are employed to resolve the phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be achieved. Moreover, to improve the practical performance of SGAS, the optimal structure of subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.
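
    The nearest-value disambiguation idea can be illustrated generically: among the candidate fine estimates produced by different 2π phase ambiguities, pick the one closest to a coarse unambiguous estimate. In this sketch `scale` stands for an assumed electrical baseline factor, not a quantity from the paper:

```python
import numpy as np

def resolve_ambiguity(phi_meas, scale, coarse, k_max=5):
    """Choose the integer ambiguity k so that the fine, phase-derived
    estimate (phi + 2*pi*k) / scale lies nearest a coarse unambiguous one."""
    k = np.arange(-k_max, k_max + 1)
    candidates = (phi_meas + 2 * np.pi * k) / scale
    return candidates[np.argmin(np.abs(candidates - coarse))]
```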

  3. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    Science.gov (United States)

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.

  4. Brain source localization using a fourth-order deflation scheme

    Science.gov (United States)

    Albera, Laurent; Ferréol, Anne; Cosandier-Rimélé, Delphine; Merlet, Isabel; Wendling, Fabrice

    2008-01-01

    A high-resolution method for solving potentially ill-posed inverse problems is proposed. This method, named FO-D-MUSIC, allows for localization of brain current sources with unconstrained orientations from surface electro- or magnetoencephalographic data using spherical or realistic head geometries. The FO-D-MUSIC method is based on (i) the separability of the data transfer matrix as a function of location and orientation parameters, (ii) the Fourth Order (FO) virtual array theory, and (iii) the deflation concept extended to FO statistics accounting for the presence of potentially but not completely statistically dependent sources. Computer results display the superiority of the FO-D-MUSIC approach in different situations (very close sources, small number of electrodes, additive Gaussian noise with unknown spatial covariance, …) compared to classical algorithms. PMID:18269984

  5. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Gennarelli

    2017-10-01

    Full Text Available Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a problem subject to intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer thanks to receiving sensor arrays which are deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.

  6. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    Science.gov (United States)

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement in the increasingly prevalent applications of wireless sensor networks (WSNs). Reducing the computational complexity and communication overhead in WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.
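
    A bare-bones sketch of the standard Cuckoo Search skeleton (Lévy-flight moves plus nest abandonment) is shown below; the paper's step-size and mutation-probability modifications are not reproduced, and all constants are illustrative:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, dim, beta=1.5):
    """Heavy-tailed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cuckoo_search(cost, bounds, n=25, iters=200, pa=0.25, seed=0):
    """Basic Cuckoo Search: Levy flights toward the best nest, plus
    re-randomizing a fraction pa of the worst nests each iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    nests = rng.uniform(lo, hi, (n, lo.size))
    f = np.apply_along_axis(cost, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(f)]
        for i in range(n):
            trial = np.clip(nests[i] + 0.01 * levy_step(rng, lo.size)
                            * (nests[i] - best), lo, hi)
            ft = cost(trial)
            if ft < f[i]:                      # greedy replacement
                nests[i], f[i] = trial, ft
        worst = np.argsort(f)[-max(1, int(pa * n)):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), lo.size))
        f[worst] = np.apply_along_axis(cost, 1, nests[worst])
    return nests[np.argmin(f)]
```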

  7. GPS-Free Localization Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2010-06-01

    Full Text Available Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis of a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factors, node densities and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time.
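
    For the coordinate-merging step, a least-squares rigid alignment in homogeneous coordinates can map one local frame onto another, with a determinant check resolving the flip (reflection) ambiguity. This is a generic Kabsch/Procrustes-style sketch, not the paper's exact derivation:

```python
import numpy as np

def merge_frames(P, Q):
    """Least-squares rotation + translation mapping 2-D points P (one local
    frame) onto the same points Q (another frame), returned as a 3x3
    homogeneous transformation matrix."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reject reflections: flip ambiguity fix
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T                           # apply as T @ [x, y, 1]
```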

  8. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. As discussed above, the bullet points of the text are summarized as follows.
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    • This text comes up with the model operator and the observed operator as the fusion scheme of RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  9. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Jiayin Liu

    2017-06-01

    Full Text Available Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.

  10. Engineering local optimality in quantum Monte Carlo algorithms

    Science.gov (United States)

    Pollet, Lode; Van Houcke, Kris; Rombouts, Stefan M. A.

    2007-08-01

    Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. We let ourselves be guided by the rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations in order to devise a locally optimal formulation of the worm algorithm while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.

  11. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

    Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  12. Fast weighted centroid algorithm for single particle localization near the information limit.

    Science.gov (United States)

    Fish, Jeremie; Scrimgeour, Jan

    2015-07-10

    A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
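
    As a generic illustration of the idea (not the authors' exact weighting scheme), a center-of-mass estimate whose background-subtracted intensities are raised to a power greater than one suppresses the bias contributed by background pixels and the distribution's wings:

```python
import numpy as np

def weighted_centroid(patch, p=2.0):
    """Center-of-mass localization on an image patch; the power weighting
    p > 1 of background-subtracted intensities is one simple bias reducer."""
    w = np.clip(patch - np.median(patch), 0, None) ** p
    ys, xs = np.indices(patch.shape)
    s = w.sum()                       # assumes the patch contains signal
    return (xs * w).sum() / s, (ys * w).sum() / s
```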

  13. Ambiguity resolving based on cosine property of phase differences for 3D source localization with uniform circular array

    Science.gov (United States)

    Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang

    2017-07-01

    Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity resolving approach is addressed for phase-based algorithms of a source's 3-D localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and the SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.

  14. ISINA: INTEGRAL Source Identification Network Algorithm

    Science.gov (United States)

    Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.

    2008-11-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples.

  15. Local simulation algorithms for Coulombic interactions

    Indian Academy of Sciences (India)

    We consider a problem in dynamically constrained Monte Carlo dynamics and show that this leads to the generation of long ranged effective interactions. This allows us to construct a local algorithm for the simulation of charged systems without ever having to evaluate pair potentials or solve the Poisson equation.

  16. A space-efficient algorithm for local similarities.

    Science.gov (United States)

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
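
    A minimal sketch of the space-saving idea: the best local-alignment (Smith-Waterman) score can be computed with only two rows of the dynamic-programming matrix, so memory grows with one sequence length rather than their product. Scoring parameters below are illustrative:

```python
def sw_best_score(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local-alignment score in linear space."""
    prev = [0] * (len(b) + 1)          # one DP row; O(len(b)) memory total
    best = 0
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            s = max(0,                                      # local: floor at 0
                    prev[j - 1] + (match if x == y else mismatch),
                    prev[j] + gap,
                    cur[j - 1] + gap)
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best
```

    Recovering the alignment itself, rather than just its score, takes a Hirschberg-style divide-and-conquer refinement within the same space bound.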

  17. On the influence of microphone array geometry on HRTF-based Sound Source Localization

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    The direction dependence of Head Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence the performance of HRTF-based SSL algorithms: the more similar the HRTFs of different angles are to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving the performance of HRTF-based SSL algorithms. Furthermore, to demonstrate the analysis results, we show the impact of HRTF similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows estimating the Direction-of-Arrival (DoA) of the target sound using any...

  18. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    Science.gov (United States)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  19. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    International Nuclear Information System (INIS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-01-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  20. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    Science.gov (United States)

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source
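
    Schematically, the table-matching step can be as simple as a nearest-neighbor search over normalized RSI patterns; the higher-resolution refinement described above is omitted in this sketch:

```python
import numpy as np

def lut_localize(rsi_measured, lut_rsi, lut_poses):
    """Return the tabulated dipole pose whose normalized model RSI pattern
    best matches the normalized measured pattern.
    rsi_measured -- (n_detectors,) measured intensities
    lut_rsi      -- (n_entries, n_detectors) precomputed model intensities
    lut_poses    -- (n_entries, ...) corresponding positions/orientations
    """
    m = rsi_measured / np.linalg.norm(rsi_measured)
    L = lut_rsi / np.linalg.norm(lut_rsi, axis=1, keepdims=True)
    return lut_poses[np.argmin(np.linalg.norm(L - m, axis=1))]
```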

  1. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish.

    Directory of Open Access Journals (Sweden)

    James Jaeyoon Jun

    Full Text Available In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but the difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal's positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole

  2. Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization

    Science.gov (United States)

    Liu, Zexi

    2018-01-01

    Indoor positioning technology gives human beings positional awareness in architectural spaces, but single-network coverage is often insufficient and location data are redundant. This article therefore studies data clustering and intelligent decision-making for indoor positioning: it designs the basic framework of multi-source indoor positioning technology and analyzes fingerprint localization algorithms based on distance measurement together with position and orientation from integrated inertial devices. By optimizing the clustering of massive indoor location data, it realizes data normalization pretreatment, multi-dimensional controllable clustering centers and multi-factor clustering, reducing the redundancy of the location data. In addition, a path-planning approach based on neural network inference and decision-making is proposed, with a sparse-data input layer, a dynamic-feedback hidden layer and an output layer; the low-dimensional results improve intelligent navigation path planning.

  3. A dynamic global and local combined particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Chen Qunxian

    2009-01-01

    Particle swarm optimization (PSO) has been developing rapidly and many results have been reported. The PSO algorithm has shown important advantages, providing fast convergence on specific problems, but it has a tendency to get stuck near an optimal solution, making it difficult to improve solution accuracy by fine-tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle and the group particles. The algorithm is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that DGLCPSO significantly improves search performance on the benchmark functions, showing the effectiveness of the algorithm for solving optimization problems.
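
    For reference, a minimal global-best PSO in Python, the baseline that DGLCPSO modifies. The sphere benchmark and standard parameter values are assumptions for illustration; this is not the paper's DGLCPSO update rule.

        import numpy as np

        def pso(f, dim=10, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(-bound, bound, (n, dim))        # particle positions
            v = np.zeros((n, dim))                          # particle velocities
            pbest = x.copy()                                # personal best positions
            pbest_f = np.apply_along_axis(f, 1, x)          # personal best values
            g = pbest[np.argmin(pbest_f)]                   # global best position
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, -bound, bound)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f                       # update improved personal bests
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)]
            return g, f(g)

        print(pso(lambda z: np.sum(z**2)))  # sphere benchmark; optimum 0 at the origin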

  4. Computationally efficient near-field source localization using third-order moments

    Science.gov (United States)

    Chen, Jian; Liu, Guohong; Sun, Xiaoying

    2014-12-01

    In this paper, a third-order moment-based estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm is proposed for passive localization of near-field sources. By properly choosing sensor outputs of the symmetric uniform linear array, two special third-order moment matrices are constructed, in which the steering matrix is a function of the electric angle γ, while the rotational factor is a function of the electric angles γ and ϕ. With the singular value decomposition (SVD) operation, all directions-of-arrival (DOAs) are estimated via polynomial rooting. After substituting the DOA information into the steering matrix, the rotational factor is determined via the total least squares (TLS) method, and the related range estimates are computed. Compared with the high-order ESPRIT method, the proposed algorithm requires a lower computational burden and avoids the parameter-matching procedure. Computer simulations are carried out to demonstrate the performance of the proposed algorithm.

  5. Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2014-01-01

    Full Text Available Blind source separation based on bio-inspired intelligence optimization has a high computational cost. To address this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is used as the objective function, which the artificial bee colony algorithm optimizes. Each source signal component, once separated out, is removed from the mixtures using the deflation method, and all source signals are recovered by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm significantly reduces the computational cost and improves the quality of signal separation compared to previous algorithms.

  6. A Dedicated Genetic Algorithm for Localization of Moving Magnetic Objects

    Directory of Open Access Journals (Sweden)

    Roger Alimi

    2015-09-01

    Full Text Available A dedicated Genetic Algorithm (GA) has been developed to localize the trajectory of ferromagnetic moving objects within a bounded perimeter. Localization of moving ferromagnetic objects is an important capability because it can be employed in situations where the object is obscured. This work is innovative for two main reasons: first, the GA has been tuned to provide an accurate and fast solution to the inverse magnetic field equations problem; second, the algorithm has been successfully tested using real-life experimental data. Very accurate trajectory localization estimates were obtained over a wide range of scenarios.

  7. A probabilistic framework for acoustic emission source localization in plate-like structures

    International Nuclear Information System (INIS)

    Dehghan Niri, E; Salamone, S

    2012-01-01

    This paper proposes a probabilistic approach for acoustic emission (AE) source localization in isotropic plate-like structures based on an extended Kalman filter (EKF). The proposed approach consists of two main stages. During the first stage, time-of-flight (TOF) measurements of Lamb waves are carried out by a continuous wavelet transform (CWT), accounting for systematic errors due to the Heisenberg uncertainty; the second stage uses an EKF to iteratively estimate the AE source location and the wave velocity. The advantages of the proposed algorithm over the traditional methods include the capability of: (1) taking into account uncertainties in TOF measurements and wave velocity and (2) efficiently fusing multi-sensor data to perform AE source localization. The performance of the proposed approach is validated through pencil-lead breaks performed on an aluminum plate at systematic grid locations. The plate was instrumented with an array of four piezoelectric transducers in two different configurations.
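
    The second stage can be sketched as an iterated EKF measurement update in Python. The sensor layout, noise levels, and the simplifying assumption that the emission time is known (so TOFs are absolute) are illustrative choices, not the authors' exact formulation.

        import numpy as np

        sensors = np.array([[0, 0], [1, 0], [0, 1], [1, 1.]])  # assumed sensor positions (m)
        src, v_true = np.array([0.3, 0.6]), 5000.0             # hidden source, wave speed (m/s)
        tof = np.linalg.norm(sensors - src, axis=1) / v_true   # noiseless TOFs
        tof += np.random.default_rng(1).normal(0, 2e-6, 4)     # TOF measurement noise (s)

        s = np.array([0.5, 0.5, 4000.0])                       # state [x, y, v], rough guess
        P = np.diag([0.1, 0.1, 1e6])                           # state covariance
        R = (2e-6) ** 2 * np.eye(len(sensors))                 # TOF noise covariance

        for _ in range(20):                                    # relinearized measurement updates
            d = np.linalg.norm(sensors - s[:2], axis=1)
            h = d / s[2]                                       # predicted TOFs
            H = np.zeros((len(sensors), 3))                    # Jacobian of h wrt [x, y, v]
            H[:, 0] = (s[0] - sensors[:, 0]) / (d * s[2])
            H[:, 1] = (s[1] - sensors[:, 1]) / (d * s[2])
            H[:, 2] = -d / s[2] ** 2
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
            s = s + K @ (tof - h)                              # state update
            P = (np.eye(3) - K @ H) @ P                        # covariance update
        print(s)  # converges near [0.3, 0.6, 5000]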

  8. Improving oncoplastic breast tumor bed localization for radiotherapy planning using image registration algorithms

    Science.gov (United States)

    Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz

    2018-02-01

    Knowledge about tumor bed localization and its shape analysis is a crucial factor for preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors located near soft tissues, which undergo complex, nonrigid deformations; among them, breast cancer can be considered the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and target 3D images, respectively, because the tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error is not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free form deformations (B-Splines FFD), different variants of the Demons, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment of the dedicated artificial deformations, target registration error calculation, 3D contour propagation and visual judgment by medical experts. The results demonstrate that the registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons provide the most accurate soft tissue alignment in terms of the ability to reconstruct the deformation field, target registration error and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem.

  9. Node localization algorithm of wireless sensor networks for large electrical equipment monitoring application

    DEFF Research Database (Denmark)

    Chen, Qinyin; Hu, Y.; Chen, Zhe

    2016-01-01

    Node localization technology is an important technology for Wireless Sensor Network (WSN) applications. An improved 3D node localization algorithm is proposed in this paper, which is based on a Multi-dimensional Scaling (MDS) node localization algorithm for large electrical equipment monitoring...

  10. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes of the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0-norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvement in localization performance is obtained in noisy and reverberant conditions.

  11. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T

    2005-01-01

    The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...

  12. Gas Source Localization via Behaviour Based Mobile Robot and Weighted Arithmetic Mean

    Science.gov (United States)

    Yeon, Ahmad Shakaff Ali; Kamarudin, Kamarulzaman; Visvanathan, Retnam; Mamduh Syed Zakaria, Syed Muhammad; Zakaria, Ammar; Munirah Kamarudin, Latifah

    2018-03-01

    This work is concerned with the localization of a gas source in a dynamic indoor environment using a single mobile robot system. Algorithms such as Braitenberg, Zig-Zag and the combination of the two were implemented on the mobile robot as gas plume searching and tracing behaviours. To calculate the gas source location, a weighted arithmetic mean strategy was used. All experiments were done on an experimental testbed consisting of a large gas sensor array (LGSA) to monitor real-time gas concentration within the testbed. Ethanol gas was released within the testbed and the source location was marked using a pattern that can be tracked by a pattern tracking system. A pattern template was also mounted on the mobile robot to track its trajectory. Measurements taken by the mobile robot and the LGSA were then compared to verify the experiments. A combined total of 36.5 hours of real-time experimental runs were done, and typical results from these experiments are presented in this paper. From the results, we obtained gas source localization errors between 0.4 m and 1.2 m from the real source location.
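
    The weighted arithmetic mean step reduces to a few lines; the robot positions and concentration values below are hypothetical.

        import numpy as np

        # Positions visited by the robot (m) and gas concentrations sensed there.
        pos = np.array([[0.5, 1.0], [1.0, 1.2], [1.5, 1.1], [2.0, 0.9]])
        conc = np.array([120., 340., 800., 290.])

        # Weighted arithmetic mean: readings with higher concentration pull the
        # estimate toward them, since concentration decays with distance from the source.
        estimate = (conc[:, None] * pos).sum(axis=0) / conc.sum()
        print(estimate)   # ~[1.41, 1.08], near the strongest readings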

  13. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    Science.gov (United States)

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on the received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from dedicated hardware. Unfortunately, RSSI values are prone to fluctuation under the influence of non-line-of-sight (NLOS) propagation in indoor spaces. Existing algorithms often produce unreliable distance estimates, leading to low accuracy and low effectiveness in indoor target localization; moreover, these approaches require extra prior knowledge about the propagation model. We therefore focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM), and a dissimilarity matrix is built to generate relative coordinates of the nodes via a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates are computed via coordinate transformation. Our algorithm performs localization well without requiring prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
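
    The MDS step that turns a dissimilarity matrix into relative coordinates can be sketched with classical (metric) MDS; note that GMDS uses a non-metric variant, and the distances below come from exact geometry rather than GMM-filtered RSSI.

        import numpy as np

        def classical_mds(D, dim=2):
            # Relative coordinates from a pairwise dissimilarity (distance) matrix.
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
            B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:dim]            # keep the largest eigenvalues
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

        # Hypothetical node layout; recoverable only up to rotation/translation,
        # which is why anchors and a coordinate transformation are needed afterwards.
        X = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1.]])
        D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
        Y = classical_mds(D)
        print(np.round(np.linalg.norm(Y[:, None] - Y[None, :], axis=2) - D, 6))  # ~0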

  14. A new PWM algorithm for battery-source three-phase inverters

    Energy Technology Data Exchange (ETDEWEB)

    Chan, C.C. (Dept. of Electrical and Electronic Engineering, Univ. of Hong Kong, Pokfulam Road (HK)); Chau, K.T. (Dept. of Electrical Engineering, Hong Kong Polytechnic, Hung Hom (HK))

    1991-01-01

    A new PWM algorithm for battery-source three-phase inverters is described in this paper. The concept of the algorithm is to determine the pulsewidths by equating the areas of the segments of the sinusoidal reference with the related output pulse areas. The algorithm is particularly suitable for handling a non-constant voltage source with good harmonic suppression. Since the pulsewidths are computable in real time with minimal storage requirements as well as compact hardware and software, the algorithm is especially suitable for single-chip microcomputer implementation. Experimental results show that a single-chip Intel 8095-based battery-source inverter can control a 3 kW synchronous motor drive satisfactorily over a frequency range of 2 to 100 Hz.
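
    The area-equating rule is easy to state in code: on each switching interval, choose the pulse width so that V_dc times the width equals the area under the sinusoidal reference on that interval. The amplitude, frequency, and pulse count below are illustrative, and a measured (non-constant) V_dc can simply be substituted per pulse.

        import numpy as np

        V_dc = 48.0                 # battery voltage (may be re-measured each pulse)
        V_m, f = 30.0, 50.0         # reference amplitude (V) and frequency (Hz)
        N = 24                      # pulses per fundamental period
        T = 1.0 / f
        edges = np.linspace(0, T, N + 1)
        w = 2 * np.pi * f

        widths = []
        for t0, t1 in zip(edges[:-1], edges[1:]):
            # Area under V_m*sin(w t) on [t0, t1] is V_m/w * (cos(w t0) - cos(w t1)).
            area = V_m / w * (np.cos(w * t0) - np.cos(w * t1))
            widths.append(abs(area) / V_dc)   # width tau so that V_dc * tau = |area|
        print(np.round(np.array(widths) * 1e6, 1))   # pulse widths in microseconds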

  15. A fingerprint classification algorithm based on combination of local and global information

    Science.gov (United States)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as a fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection commonly considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering orientation structure and image texture in adjacent areas. Then a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  16. Multiobjective memetic estimation of distribution algorithm based on an incremental tournament local searcher.

    Science.gov (United States)

    Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong

    2014-01-01

    A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity models built from these manifold features use only global statistical information from the population, so the potential of promising individuals is not well exploited, which hampers the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that gives a multiobjective algorithm well-distributed solutions at low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  17. Multiobjective Memetic Estimation of Distribution Algorithm Based on an Incremental Tournament Local Searcher

    Directory of Open Access Journals (Sweden)

    Kaifeng Yang

    2014-01-01

    Full Text Available A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity models built from these manifold features use only global statistical information from the population, so the potential of promising individuals is not well exploited, which hampers the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that gives a multiobjective algorithm well-distributed solutions at low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  18. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance-fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of directional changes for the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-estimation of signal parameters via rotational invariance techniques (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method with subspace updating via the FAPI algorithm. It is shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. The performance of both methods improves as the SNR increases, more prominently so for FAPI-TLS-ESPRIT, but degrades as the number of sources increases. It is also shown that our method depends on the form of the angular distribution function when tracking the central DOAs, and that the more widely the sources are spaced, the more exactly the proposed method tracks the DOAs.

  19. A block matching-based registration algorithm for localization of locally advanced lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001).

  20. A block matching-based registration algorithm for localization of locally advanced lung tumors

    International Nuclear Information System (INIS)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left
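
    The per-block registration at the heart of BMR can be sketched as an exhaustive integer-shift search maximizing normalized cross-correlation. This toy 2D version omits the multiresolution scheme, neighborhood-consistency selection, median filtering, and Procrustes step described above, and the images are synthetic.

        import numpy as np

        def best_shift(block, target, center, search=5):
            # Exhaustively search integer shifts maximizing normalized cross-correlation.
            h, w = block.shape
            cy, cx = center
            best, best_ncc = (0, 0), -np.inf
            b = (block - block.mean()) / (block.std() + 1e-9)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = cy + dy, cx + dx
                    patch = target[y:y + h, x:x + w]
                    if patch.shape != block.shape:
                        continue                      # skip out-of-bounds candidates
                    p = (patch - patch.mean()) / (patch.std() + 1e-9)
                    ncc = (b * p).mean()
                    if ncc > best_ncc:
                        best_ncc, best = ncc, (dy, dx)
            return best

        rng = np.random.default_rng(2)
        planning = rng.random((64, 64))
        on_treatment = np.roll(planning, (3, -2), axis=(0, 1))   # known rigid shift
        print(best_shift(planning[20:30, 20:30], on_treatment, (20, 20)))  # -> (3, -2)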

  1. A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.

    Science.gov (United States)

    Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea

    2018-05-08

    With the increasing realization of the Internet-of-Things (IoT) and the rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging-based localization algorithms use triangulation to estimate the physical location of only those wireless nodes that are within one-hop distance of the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing wireless nodes that may reside multiple hops away from the anchor nodes. These latter algorithms have attracted growing interest from the research community due to the smaller number of anchor nodes required. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy because it exploits only the network topology (i.e., the number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values of links between one-hop neighbors. Moreover, we exploit already-localized nodes by promoting them to additional anchor nodes. Our simulations show that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-Hop algorithm. In some scenarios, the proposed algorithm improves localization accuracy by almost 95%, 90% and 70% compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
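
    The basic DV-Hop pipeline (hop counts by flooding, per-anchor average hop distance, then least-squares multilateration) can be sketched as follows. The network, radio range, and anchor choice are hypothetical, the graph is assumed connected, and the RSSI refinement proposed in the paper is not included.

        import numpy as np
        from collections import deque

        def hops_from(src, adj, n):
            # BFS hop counts from one node to all others (network assumed connected).
            h = [None] * n
            h[src] = 0
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if h[v] is None:
                        h[v] = h[u] + 1
                        q.append(v)
            return h

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 10, (60, 2))                    # hidden node positions
        anchors = [0, 1, 2]
        adj = [[j for j in range(60) if j != i and np.linalg.norm(X[i] - X[j]) < 2.5]
               for i in range(60)]
        H = np.array([hops_from(a, adj, 60) for a in anchors], float)

        # Each anchor's average hop distance: true inter-anchor distances / hop counts.
        d_aa = np.linalg.norm(X[anchors][:, None] - X[anchors][None, :], axis=2)
        hop_size = d_aa.sum(axis=1) / H[:, anchors].sum(axis=1)

        # Estimated ranges, then linearized least-squares multilateration for one node.
        u = 30
        r = H[:, u] * hop_size
        A = 2 * (X[anchors][1:] - X[anchors][0])
        b = (r[0]**2 - r[1:]**2
             + np.sum(X[anchors][1:]**2, axis=1) - np.sum(X[anchors][0]**2))
        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(est, X[u])                                   # estimate vs. true position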

  2. A Separation Algorithm for Sources with Temporal Structure Only Using Second-order Statistics

    Directory of Open Access Journals (Sweden)

    J.G. Wang

    2013-09-01

    Full Text Available Unlike conventional blind source separation (BSS), which deals with independent identically distributed (i.i.d.) sources, this paper addresses separation from mixtures of sources with temporal structure, such as linear autocorrelations. Many sequential extraction algorithms have been reported, but their deflation scheme inevitably accumulates errors. We propose a robust separation algorithm that recovers the original sources simultaneously, through a joint diagonalizer of several averaged delayed covariance matrices at the optimal time delay and its integer multiples. The proposed algorithm is computationally simple and efficient, since it is based on second-order statistics only. Extensive simulation results confirm the validity and high performance of the algorithm. Compared with related extraction algorithms, its separation signal-to-noise ratio for a desired source can be 20 dB higher, and it is rather insensitive to estimation error in the time delay.
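
    The simplest member of this second-order family can be sketched in Python: an AMUSE-style procedure (assumed here purely for illustration) whitens the mixtures and diagonalizes a single symmetrized delayed covariance. The paper's algorithm instead jointly diagonalizes several delayed covariances, which is more robust.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.arange(5000)
        S = np.vstack([np.sin(0.05 * t),
                       np.sign(np.sin(0.017 * t))])          # temporally structured sources
        A = rng.random((2, 2))
        X = A @ S                                            # observed mixtures
        X = X - X.mean(axis=1, keepdims=True)

        # Whitening: make the zero-lag covariance the identity.
        C0 = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(C0)
        W = E @ np.diag(d ** -0.5) @ E.T
        Z = W @ X

        # Symmetrized covariance at one time delay; its eigenvectors give the rotation.
        tau = 5
        C = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
        C = (C + C.T) / 2
        _, U = np.linalg.eigh(C)
        Y = U.T @ Z                                          # recovered (order/sign ambiguous)
        print(np.round(np.corrcoef(Y, S)[2:, :2], 2))        # each row ~ +/-1 in one column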

  3. TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization

    Science.gov (United States)

    Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi

    2011-12-01

    We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
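
    For orientation, a standard narrowband MUSIC pseudospectrum for DOA estimation on a uniform linear array. The scenario (array size, sources, noise level) is illustrative, and this is the conventional MUSIC baseline, not the TSaT-MUSIC variant, which additionally resolves propagation delays.

        import numpy as np

        rng = np.random.default_rng(5)
        M, N, doas = 8, 200, np.deg2rad([-20., 35.])         # sensors, snapshots, true DOAs

        def steer(theta):
            # Half-wavelength-spaced ULA steering vectors, one column per angle.
            return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

        S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
        X = steer(doas) @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

        R = X @ X.conj().T / N                     # sample covariance
        w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
        En = V[:, :M - 2]                          # noise subspace (2 sources assumed known)

        grid = np.deg2rad(np.linspace(-90, 90, 1801))
        P = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)  # pseudospectrum

        pk = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
        pk = sorted(pk, key=lambda i: P[i])[-2:]   # two strongest local maxima
        print(sorted(np.rad2deg(grid[pk])))        # ~[-20, 35]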

  4. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    International Nuclear Information System (INIS)

    Chen, Ming; Yu, Hengyong

    2015-01-01

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation not only keeps most of the advantages of a traditional multisource system but also covers a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  5. DNA evolutionary algorithm (DNAEA) for source term identification in convection-diffusion equation

    International Nuclear Information System (INIS)

    Yang, X-H; Hu, X-X; Shen, Z-Y

    2008-01-01

    The source identification problem is recast as an optimization problem in this paper. This is a complicated nonlinear optimization problem that is intractable with traditional optimization methods, so a DNA evolutionary algorithm (DNAEA) is presented to solve it. In this algorithm, an initial population is generated by a chaos algorithm. As the search range shrinks, DNAEA is gradually directed toward an optimal result by the excellent individuals it obtains. The position and intensity of the pollution source are found accurately with DNAEA. Compared with a Gray-coded genetic algorithm and a pure random search algorithm, DNAEA has faster convergence and higher calculation precision.

  6. Nonlinear estimation-based dipole source localization for artificial lateral line systems

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan Xiaobo

    2013-01-01

    As a flow-sensing organ, the lateral line system plays an important role in various behaviors of fish. An engineering equivalent of a biological lateral line is of great interest to the navigation and control of underwater robots and vehicles. A vibrating sphere, also known as a dipole source, can emulate the rhythmic movement of fins and body appendages, and has been widely used as a stimulus in the study of biological lateral lines. Dipole source localization has also become a benchmark problem in the development of artificial lateral lines. In this paper we present two novel iterative schemes, referred to as Gauss–Newton (GN) and Newton–Raphson (NR) algorithms, for simultaneously localizing a dipole source and estimating its vibration amplitude and orientation, based on the analytical model for a dipole-generated flow field. The performance of the GN and NR methods is first confirmed with simulation results and the Cramer–Rao bound (CRB) analysis. Experiments are further conducted on an artificial lateral line prototype, consisting of six millimeter-scale ionic polymer–metal composite sensors with intra-sensor spacing optimized with CRB analysis. Consistent with simulation results, the experimental results show that both GN and NR schemes are able to simultaneously estimate the source location, vibration amplitude and orientation with comparable precision. Specifically, the maximum localization error is less than 5% of the body length (BL) when the source is within the distance of one BL. Experimental results have also shown that the proposed schemes are superior to the beamforming method, one of the most competitive approaches reported in literature, in terms of accuracy and computational efficiency.

  7. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. That algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information asked of members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.

  8. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    Science.gov (United States)

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and the initial estimate of the signal-space dimension. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications.

  9. Multi-hop localization algorithm based on grid-scanning for wireless sensor networks.

    Science.gov (United States)

    Wan, Jiangwen; Guo, Xiaolei; Yu, Ning; Wu, Yinfeng; Feng, Renjian

    2011-01-01

    For large-scale wireless sensor networks (WSNs) with a minority of anchor nodes, multi-hop localization is a popular scheme for determining the geographical positions of the normal nodes. However, in practice existing multi-hop localization methods suffer from various kinds of problems, such as poor adaptability to irregular topology, high computational complexity, low positioning accuracy, etc. To address these issues, in this paper we propose a novel Multi-hop Localization algorithm based on Grid-Scanning (MLGS). First, the factors that influence multi-hop distance estimation are studied and a more realistic multi-hop localization model is constructed. Then, the feasible regions of the normal nodes are determined according to the intersection of bounding square rings. Finally, a verifiably good approximation scheme based on grid-scanning is developed to estimate the coordinates of the normal nodes. Additionally, the positioning accuracy of the normal nodes can be improved through neighbors' collaboration. Extensive simulations are performed in isotropic and anisotropic networks. Comparisons with some typical node localization algorithms confirm the effectiveness and efficiency of our algorithm.
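
    The feasible-region idea can be sketched by scanning a grid and keeping the cells consistent with every anchor's hop-count ring. Circular rings are used below for simplicity where the paper intersects bounding square rings, and all numbers are hypothetical.

        import numpy as np

        # A node h hops from an anchor lies roughly within distance [(h-1)*r, h*r],
        # where r is the radio range.
        anchors = np.array([[0, 0], [10, 0], [0, 10.]])
        hops = np.array([3, 2, 4])
        r = 3.0

        xs, ys = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
        cells = np.column_stack([xs.ravel(), ys.ravel()])
        ok = np.ones(len(cells), bool)
        for a, h in zip(anchors, hops):
            d = np.linalg.norm(cells - a, axis=1)
            ok &= (d <= h * r) & (d >= max(h - 1, 0) * r)   # inside this anchor's ring

        feasible = cells[ok]
        print(feasible.mean(axis=0))   # centroid of the feasible region as the estimate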

  10. A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.

    Science.gov (United States)

    Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun

    2017-11-29

    As a key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are interconnected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower the network survivability in belt-type deployments. However, most existing positioning solutions focus only on algorithm accuracy and do not consider security. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented that satisfies an energy-balancing strategy in resource-constrained scenarios. According to cooperative localization theory and network connectivity properties, a parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances, and the node positioning accuracy is further enhanced using the steepest descent method. Experimental simulations show that the new scheme meets its targets and improves the survivability of belt-type sensor networks in terms of anti-interference, network energy saving, etc.

  11. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Although there are various source code plagiarism detection approaches, only a few focus on low-level representation for deducing similarity; most consider only the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity; according to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.

  12. Source localization of rhythmic ictal EEG activity

    DEFF Research Database (Denmark)

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana

    2013-01-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model.

  13. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    Full Text Available We present a simplified algorithm for localizing an object using multiple visual images obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane for creating a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate, which may be obtained from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity, and nonlinear distortion of the digital imaging devices is compensated during the iterations. Finally, performance in several scenarios is evaluated and analyzed in both indoor and outdoor environments.

  14. A Study on Improvement of Algorithm for Source Term Evaluation

    International Nuclear Information System (INIS)

    Park, Jeong Ho; Park, Do Hyung; Lee, Jae Hee

    2010-03-01

    The program developed by KAERI for source term assessment of radwastes from the advanced nuclear fuel cycle consists of a spent fuel database analysis module, a spent fuel arising projection module, and an automatic characterization module for radwastes from pyroprocessing. To improve the algorithms adopted in the program, the following items were carried out: development of an algorithm to decrease the analysis time for the spent fuel database; development of a setup routine for the analysis procedure; improvement of the interface for the spent fuel arising projection module; and optimization of the data management algorithm needed for the massive calculations to estimate source terms of radwastes from the advanced fuel cycle. The program developed through this study can perform source term estimation even when several spent fuel assemblies with different fuel designs, initial enrichments, irradiation histories, discharge burnups, and cooling times are processed at the same time in the pyroprocess. It is expected that this program will be very useful for the design of the unit processes of pyroprocessing and the disposal system.

  15. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    Science.gov (United States)

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  16. A Relative-Localization Algorithm Using Incomplete Pairwise Distance Measurements for Underwater Applications

    Directory of Open Access Journals (Sweden)

    Kae Y. Foo

    2010-01-01

    Full Text Available The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pairwise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution for deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented, based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field trial was then carried out, and field measurements are presented to highlight the practical implementation of this algorithm.

  17. Extension to HiRLoc Algorithm for Localization Error Computation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Swati Saxena

    2013-09-01

    Full Text Available Wireless sensor networks (WSNs) have gained importance in recent years as they support a large spectrum of applications such as automotive, health, military, environmental, home and office. Various algorithms have been proposed for making this technology more adaptive; the existing algorithms address issues such as safety, security, power consumption, lifetime and localization. This paper presents an extension to the HiRLoc algorithm and highlights its benefits. Extended HiRLoc significantly reduces the average localization error by suggesting a new directional-antenna-based scheme.

  18. Sequential Uniformly Reweighted Sum-Product Algorithm for Cooperative Localization in Wireless Networks

    OpenAIRE

    Li, Wei; Yang, Zhen; Hu, Haifeng

    2014-01-01

    Graphical models have been widely applied in solving distributed inference problems in wireless networks. In this paper, we formulate the cooperative localization problem in a mobile network as an inference problem on a factor graph. Using a sequential schedule of message updates, a sequential uniformly reweighted sum-product algorithm (SURW-SPA) is developed for mobile localization problems. The proposed algorithm combines the distributed nature of belief propagation (BP) with the improved p...

  19. RDEL: Restart Differential Evolution algorithm with Local Search Mutation for global numerical optimization

    Directory of Open Access Journals (Sweden)

    Ali Wagdy Mohamed

    2014-11-01

    Full Text Available In this paper, a novel version of the Differential Evolution (DE) algorithm based on a local search mutation and a restart mechanism for solving global numerical optimization problems over continuous space is presented. The proposed algorithm is named Restart Differential Evolution algorithm with Local Search Mutation (RDEL). In RDEL, inspired by Particle Swarm Optimization (PSO), a novel local mutation rule based on the positions of the best and the worst individuals in the entire population of a particular generation is introduced. The novel local mutation scheme is joined with the basic mutation rule through a linearly decreasing function, and is shown to enhance the local search tendency of basic DE and speed up convergence. Furthermore, a restart mechanism based on a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme is combined to avoid stagnation and/or premature convergence. Additionally, an exponentially increasing crossover probability rule and uniform scaling factors of DE are introduced to promote population diversity and improve the search process, respectively. The performance of RDEL is investigated and compared with basic differential evolution and state-of-the-art parameter-adaptive differential evolution variants. The proposed modifications significantly improve the performance of DE in terms of solution quality, efficiency and robustness.

  20. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of scattering, reflection, diffraction and refraction along the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation of ID sources. By constructing the signal subspace, two rotational invariance relationships are constructed, and we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs), and the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided; therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
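
    The final angulation step (intersecting two measured bearings from two base stations) reduces to a 2x2 linear solve; the station positions below are hypothetical.

        import numpy as np

        def angulate(p1, az1, p2, az2):
            # Intersect two bearing rays in the plane: solve p1 + t1*d1 = p2 + t2*d2.
            d1 = np.array([np.cos(az1), np.sin(az1)])
            d2 = np.array([np.cos(az2), np.sin(az2)])
            A = np.column_stack([d1, -d2])
            t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
            return np.asarray(p1) + t[0] * d1

        # Two base stations each measure a DOA (azimuth) toward the sensor.
        src = np.array([3.0, 4.0])                          # hidden sensor position
        p1, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
        az1 = np.arctan2(*(src - p1)[::-1])                 # simulated bearing from p1
        az2 = np.arctan2(*(src - p2)[::-1])                 # simulated bearing from p2
        print(angulate(p1, az1, p2, az2))                   # -> [3. 4.]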

  1. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at stand-off distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as from a Global Positioning System (GPS) and an Inertial Navigation System (INS), must also be incorporated; a method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  2. A voting-based star identification algorithm utilizing local and global distribution

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua

    2018-03-01

    A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain candidates for the sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. Simulation shows that the proposed algorithm achieves a 99.81% identification rate with 2-pixel standard deviation positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, a real-sky test shows that the proposed algorithm performs well on real star images.

  3. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    Science.gov (United States)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA

  4. COM-LOC: A Distributed Range-Free Localization Algorithm in Wireless Networks

    NARCIS (Netherlands)

    Dil, B.J.; Havinga, Paul J.M.; Marusic, S; Palaniswami, M; Gubbi, J.; Law, Y.W.

    2009-01-01

    This paper investigates distributed range-free localization in wireless networks using a communication protocol called sum-dist which is commonly employed by localization algorithms. With this protocol, the reference nodes flood the network in order to estimate the shortest distance between the

  5. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass, and heat from within the earth. According to some studies carried out by research institutes, about 25% of new generation will be provided by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by the Weighted Least-Squares (WLS) approach. Practical considerations include var compensators, Voltage Regulators (VRs), and Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
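
    A rough sketch of the hybrid idea described above: a plain PSO global search whose incumbent is then polished with a Nelder-Mead step. The weighted-least-squares objective, its matrix H, and the measurements z are synthetic stand-ins, not the paper's distribution-network model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic weighted-least-squares stand-in for the DSE cost:
# J(x) = sum_i w_i * (z_i - (H x)_i)^2
H = rng.normal(size=(8, 3))
z = rng.normal(size=8)
w = np.ones(8)

def wls(x):
    r = z - H @ x
    return float(np.sum(w * r * r))

# Plain PSO global phase.
n_particles, dim, iters = 30, 3, 100
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([wls(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([wls(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

# Nelder-Mead local polish of the PSO incumbent (the "NM" in PSO-NM).
res = minimize(wls, gbest, method="Nelder-Mead")
print("refined estimate:", res.x, "cost:", res.fun)
```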

  6. Partial differential equation-based localization of a monopole source from a circular array.

    Science.gov (United States)

    Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa

    2013-10-01

    Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.

  7. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    International Nuclear Information System (INIS)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.

    2015-01-01

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
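
    A minimal sketch of the training step described above using scikit-learn's RandomForestClassifier; the features and labels are synthetic placeholders, not the 2XMMi-DR2 variability features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-source variability features and class labels.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("out-of-bag accuracy:", clf.oob_score_)
```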

  8. DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Ramesh

    2010-06-01

    Full Text Available Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attacks that eavesdrop or prevent data from being transferred to the end-user safely and correctly. In this paper, a new symmetrical encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm is applied successfully to both text files and voice messages.

  9. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
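
    A minimal sketch of the modeling stage described above: a random hidden layer with ridge-regularized output weights (a regularized extreme learning machine) mapping hop-count vectors to distances. The training data, layer size, and regularization weight are synthetic assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training pairs: hop-count vectors to 5 anchors -> physical distances.
X = rng.integers(1, 10, size=(200, 5)).astype(float)
d = X @ np.array([3.1, 2.7, 3.3, 2.9, 3.0]) + rng.normal(0, 1.0, 200)

# Extreme learning machine: fixed random hidden layer, ridge-regularized
# output weights solved in closed form.
n_hidden, lam = 50, 1e-2
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                              # hidden activations
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ d)

d_hat = np.tanh(X @ W + b) @ beta                   # predicted distances
print("train RMSE:", np.sqrt(np.mean((d - d_hat) ** 2)))
```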

  10. Localization from near-source quasi-static electromagnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, John Compton [Univ. of Southern California, Los Angeles, CA (United States)

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
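
    For readers unfamiliar with the MUSIC step mentioned above, the sketch below is the textbook narrowband version for a uniform linear array, not the thesis's MEG/EEG adaptation; the array size, noise level, and source angles are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

M, N = 8, 200                                   # sensors, snapshots
true_deg = [-20.0, 35.0]                        # illustrative source angles
rng = np.random.default_rng(3)

def steering(theta_deg):
    phase = np.pi * np.sin(np.deg2rad(theta_deg))   # element spacing lambda/2
    return np.exp(1j * phase * np.arange(M))

A = np.stack([steering(t) for t in true_deg], axis=1)
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                          # sample spatial covariance
_, V = np.linalg.eigh(R)                        # eigenvalues ascending
En = V[:, :-2]                                  # noise subspace (2 sources)

grid = np.linspace(-90, 90, 721)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])
peaks, _ = find_peaks(spec)
best = peaks[np.argsort(spec[peaks])[-2:]]
print("estimated DoAs (deg):", np.sort(grid[best]))
```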

  11. GWO-LPWSN: Grey Wolf Optimization Algorithm for Node Localization Problem in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    R. Rajakumar

    2017-01-01

    Full Text Available Seyedali Mirjalili et al. (2014) introduced a novel metaheuristic technique, grey wolf optimization (GWO). This algorithm mimics the social behavior of grey wolves, following their leadership hierarchy and attacking strategy. A rising issue in wireless sensor networks (WSNs) is the localization problem, whose objective is to find the geographical positions of unknown nodes with the help of anchor nodes. In this work, the GWO algorithm is incorporated to identify the correct positions of unknown nodes and thus handle the node localization problem. The proposed work is implemented using MATLAB 8.2, with nodes deployed at random locations within the desired network area. Measures such as computation time, percentage of localized nodes, and minimum localization error are used to analyse the performance of GWO against other metaheuristic algorithms such as particle swarm optimization (PSO) and the modified bat algorithm (MBA). The observed results convey that GWO provides promising results compared to PSO and MBA in terms of convergence rate and success rate.
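
    The sketch below applies the canonical GWO position updates (alpha, beta, and delta wolves guiding the pack, with the coefficient a decaying from 2 to 0) to a toy range-based localization objective; the anchor layout, ranges, and parameter choices are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy node-localization objective: squared range error to 4 invented anchors.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - target, axis=1)   # noiseless for clarity

def cost(x):
    return float(np.sum((np.linalg.norm(anchors - x, axis=1) - ranges) ** 2))

n_wolves, dim, iters = 20, 2, 200
X = rng.uniform(0, 10, (n_wolves, dim))

for t in range(iters):
    fit = np.array([cost(x) for x in X])
    alpha, beta, delta = X[np.argsort(fit)[:3]]     # three best wolves
    a = 2 - 2 * t / iters                           # a decays from 2 to 0
    for i in range(n_wolves):
        Xi = np.zeros(dim)
        for leader in (alpha, beta, delta):
            A = a * (2 * rng.random(dim) - 1)
            C = 2 * rng.random(dim)
            Xi += leader - A * np.abs(C * leader - X[i])
        X[i] = Xi / 3.0                             # average of the three pulls

best = X[np.argmin([cost(x) for x in X])]
print("estimated node position:", best)
```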

  12. Trilateration-based localization algorithm for ADS-B radar systems

    Science.gov (United States)

    Huang, Ming-Shih

    Rapidly increasing growth and demand in various unmanned aerial vehicles (UAV) have pushed governmental regulation development and numerous technology research advances toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAVs into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. As the automatic dependent surveillance-broadcast (ADS-B) system has gained wide acceptance, additional exploitations of the radioed satellite-based information are topics of current interest. One such opportunity includes the augmentation of the communication ADS-B signal with a random bi-phase modulation for concurrent use as a radar signal for detecting other aircraft in the vicinity. This dissertation provides detailed discussion of the ADS-B radar system, as well as the formulation and analysis of a suitable non-cooperative multi-target tracking method for the ADS-B radar system using radar ranging techniques and particle filter algorithms. In order to deal with specific challenges faced by the ADS-B radar system, several estimation algorithms are studied. Trilateration-based localization algorithms are proposed due to their easy implementation and their ability to work with coherent signal sources. The centroid of the three most closely spaced intersections of constant-range loci is conventionally used as the trilateration estimate without rigorous justification. In this dissertation, we address the quality of trilateration intersections through range scaling factors. A number of well-known triangle centers, including the centroid, incenter, Lemoine point (LP), and Fermat point (FP), are discussed in detail. To the author's best knowledge, LP was never previously associated with trilateration techniques. According to our study, LP is proposed as the best trilateration estimator thanks to the
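
    As a baseline for comparison, the sketch below solves the trilateration problem by linearized least squares; with noisy ranges the three circles do not meet at a single point, which is exactly the situation the triangle-center estimators above are designed to handle. Anchors and ranges are invented for illustration.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linearized least-squares trilateration in 2-D.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in the unknown position.
    """
    A = 2 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])
truth = np.array([3.0, 2.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + 0.05   # slightly noisy
print("estimated position:", trilaterate(anchors, ranges))
```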

  13. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Science.gov (United States)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the distribution of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results proves the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.

  14. Local anesthesia selection algorithm in patients with concomitant somatic diseases.

    Science.gov (United States)

    Anisimova, E N; Sokhov, S T; Letunova, N Y; Orekhova, I V; Gromovik, M V; Erilin, E A; Ryazantsev, N A

    2016-01-01

    The paper presents the basic principles of local anesthesia selection in patients with concomitant somatic diseases. These principles are: history taking; analysis of drug interactions with local anesthetic and sedation agents; determination of the functional status of the patient; correction of patient anxiety; and dental care with monitoring of hemodynamic parameters. It was found that adhering to this algorithm promotes the prevention of urgent conditions in patients in outpatient dentistry.

  15. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex backgrounds. Firstly, a determination criterion for eye location is established using prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Thirdly, with the progressive increase of the threshold by an appropriate step within that range, once two eye blocks appear in the segmented image, they are detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the factuality of the two detected eyes. To avoid background interference, skin color segmentation can be applied in order to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency and high correct localization rate of the algorithm.

  16. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM. We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.

  17. High-frequency asymptotics of the local vertex function. Algorithmic implementations

    Energy Technology Data Exchange (ETDEWEB)

    Tagliavini, Agnese; Wentzell, Nils [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany); Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Li, Gang; Rohringer, Georg; Held, Karsten; Toschi, Alessandro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Taranto, Ciro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Max Planck Institute for Solid State Research, D-70569 Stuttgart (Germany); Andergassen, Sabine [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany)

    2016-07-01

    Local vertex functions are a crucial ingredient of several forefront many-body algorithms in condensed matter physics. However, the full treatment of their frequency dependence poses a huge limitation to the numerical performance. A significant advancement requires an efficient treatment of the high-frequency asymptotic behavior of the vertex functions. We here provide a detailed diagrammatic analysis of the high-frequency asymptotic structures and their physical interpretation. Based on these insights, we propose a frequency parametrization, which captures the whole high-frequency asymptotics for arbitrary values of the local Coulomb interaction and electronic density. We present its algorithmic implementation in many-body solvers based on parquet-equations as well as functional renormalization group schemes and assess its validity by comparing our results for the single impurity Anderson model with exact diagonalization calculations.

  18. An inverse source location algorithm for radiation portal monitor applications

    International Nuclear Information System (INIS)

    Miller, Karen A.; Charlton, William S.

    2010-01-01

    Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
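
    A heavily simplified sketch of the optimization loop described above, assuming a 1/r² point-kernel forward model in place of the paper's 3-D deterministic transport solver and a finite-difference gradient in place of the adjoint calculation; the detector layout and source strength are invented for illustration.

```python
import numpy as np

detectors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
S_true = 5.0e4                        # invented source strength
src_true = np.array([12.0, 7.0])      # invented true position

def predict(src):
    # 1/r^2 point kernel; the paper instead solves the transport equation.
    r2 = np.sum((detectors - src) ** 2, axis=1)
    return S_true / (4.0 * np.pi * r2)

meas = predict(src_true)              # noiseless "measurements" for clarity

def objective(src):
    d = predict(src) - meas
    return 0.5 * float(np.sum(d * d))

def grad(src, h=1e-5):
    # Central finite differences; the paper uses adjoint transport instead.
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2); e[k] = h
        g[k] = (objective(src + e) - objective(src - e)) / (2 * h)
    return g

src = np.array([5.0, 5.0])            # initial guess
for _ in range(200):
    g = grad(src)
    if np.linalg.norm(g) < 1e-10:     # convergence criterion
        break
    step = 1.0
    # Backtracking line search along the steepest-descent direction.
    while objective(src - step * g) > objective(src) and step > 1e-12:
        step *= 0.5
    src = src - step * g
print("estimated source position:", src)
```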

  19. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

    Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data for the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is also faster than maximum-likelihood methods because it is based on simple back projections of Compton scatter data.

  20. A review of feature detection and match algorithms for localization and mapping

    Science.gov (United States)

    Li, Shimiao

    2017-09-01

    Localization and mapping is an essential ability of a robot to keep track of its own location in an unknown environment. Among existing methods for this purpose, vision-based methods are effective solutions for being accurate, inexpensive, and versatile. Vision-based methods can generally be categorized as feature-based approaches and appearance-based approaches. Feature-based approaches show higher performance in textured scenarios, but their performance depends highly on the applied feature-detection algorithms. In this paper, we survey algorithms for feature detection, which is an essential step in achieving vision-based localization and mapping, and present mathematical models of the algorithms one after another. To compare the performances of the algorithms, we conducted a series of experiments on their accuracy, speed, scale invariance, and rotation invariance. The results of the experiments showed that ORB is the fastest algorithm in detecting and matching features, its speed being more than 10 times that of SURF and approximately 40 times that of SIFT. SIFT, although with no advantage in terms of speed, yields the most correct matching pairs, demonstrating its accuracy.
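
    As a concrete example of the feature detection and matching step surveyed above, the sketch below uses OpenCV's ORB detector with brute-force Hamming matching; the image file names are placeholders, not data from the paper.

```python
import cv2

# Placeholder file names; substitute two overlapping views of the same scene.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "images not found"

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the appropriate metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches; best distance:", matches[0].distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("matches.png", vis)
```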

  1. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    Science.gov (United States)

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors; nevertheless, this raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates, and similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.

  2. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Ting Wang

    2018-04-01

    Full Text Available In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors; nevertheless, this raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates, and similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
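
    The sketch below is a plain point-to-point ICP baseline of the kind the paper improves upon, not the feature-fused WP-ICP itself; the toy scan and rigid transform are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """SVD (Kabsch) solution of the rigid transform mapping P onto Q (2-D)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)      # nearest-neighbour association
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Toy scan: target is the source rotated by 10 degrees and shifted.
rng = np.random.default_rng(0)
source = rng.uniform(0, 5, (100, 2))
ang = np.deg2rad(10.0)
Rt = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
target = source @ Rt.T + np.array([0.5, -0.3])
aligned = icp(source, target)
print("alignment residual:", np.linalg.norm(aligned - target))
```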

  3. A comparison of optimization algorithms for localized in vivo B0 shimming.

    Science.gov (United States)

    Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke

    2018-02-01

    To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain in 10 volunteers. The results of the performance tests proved that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient algorithm among all of the chosen algorithms. It enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
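
    A minimal sketch of a constrained, Tikhonov-regularized linear shim solve in the spirit of the ConsTru idea described above (the paper's exact formulation is not given in this record); the coil-to-field matrix, field map, and current limits are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)

# A maps shim-coil currents to field offsets at ROI voxels; b is the
# measured B0 map. Both are synthetic stand-ins here.
n_vox, n_coils = 500, 8
A = rng.normal(size=(n_vox, n_coils))
b = rng.normal(size=n_vox)

# Tikhonov regularization via row augmentation: minimize
# ||A x + b||^2 + lam * ||x||^2 subject to hardware current limits.
lam = 0.1
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n_coils)])
b_aug = np.concatenate([-b, np.zeros(n_coils)])

res = lsq_linear(A_aug, b_aug, bounds=(-2.0, 2.0))   # box constraints
print("shim currents:", res.x)
print("residual field std:", np.std(A @ res.x + b))
```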

  4. CAMPAIGN: an open-source library of GPU-accelerated data clustering algorithms.

    Science.gov (United States)

    Kohlhoff, Kai J; Sosnick, Marc H; Hsu, William T; Pande, Vijay S; Altman, Russ B

    2011-08-15

    Data clustering techniques are an essential component of a good data analysis toolbox. Many current bioinformatics applications are inherently compute-intense and work with very large datasets. Sequential algorithms are inadequate for providing the necessary performance. For this reason, we have created Clustering Algorithms for Massively Parallel Architectures, Including GPU Nodes (CAMPAIGN), a central resource for data clustering algorithms and tools that are implemented specifically for execution on massively parallel processing architectures. CAMPAIGN is a library of data clustering algorithms and tools, written in 'C for CUDA' for Nvidia GPUs. The library provides up to two orders of magnitude speed-up over respective CPU-based clustering algorithms and is intended as an open-source resource. New modules from the community will be accepted into the library and the layout of it is such that it can easily be extended to promising future platforms such as OpenCL. Releases of the CAMPAIGN library are freely available for download under the LGPL from https://simtk.org/home/campaign. Source code can also be obtained through anonymous subversion access as described on https://simtk.org/scm/?group_id=453. kjk33@cantab.net.

  5. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

    to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at better understanding how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating...

  6. Optimization of source pencil deployment based on plant growth simulation algorithm

    International Nuclear Information System (INIS)

    Yang Lei; Liu Yibao; Liu Yujuan

    2009-01-01

    A plant growth simulation algorithm was proposed for optimizing source pencil deployment in a 60Co irradiator. A method to evaluate the calculation results was presented, with the objective function defined as the relative standard deviation of the exposure rate at the reference points, and the transformation of the two kinds of control variables, i.e., the position coordinates x_j and y_j of the source pencils in the source plaque, into proper integer variables was also analyzed and solved. The results show that the plant growth simulation algorithm, which possesses both random and directional search mechanisms, has good global search ability and can be used conveniently. The results are affected little by initial conditions and improve the uniformity of the irradiation field, providing a dependable basis for optimizing the arrangement of source pencils at irradiation facilities. (authors)

  7. Active damage localization for plate-like structures using wireless sensors and a distributed algorithm

    International Nuclear Information System (INIS)

    Liu, L; Yuan, F G

    2008-01-01

    Wireless structural health monitoring (SHM) systems have emerged as a promising technology for robust and cost-effective structural monitoring. However, the applications of wireless sensors on active diagnosis for structural health monitoring (SHM) have not been extensively investigated. Due to limited energy sources, battery-powered wireless sensors can only perform limited functions and are expected to operate at a low duty cycle. Conventional designs are not suitable for sensing high frequency signals, e.g. in the ultrasonic frequency range. More importantly, algorithms to detect structural damage with a vast amount of data usually require considerable processing and communication time and result in unaffordable power consumption for wireless sensors. In this study, an energy-efficient wireless sensor for supporting high frequency signals and a distributed damage localization algorithm for plate-like structures are proposed, discussed and validated to supplement recent advances made for active sensing-based SHM. First, the power consumption of a wireless sensor is discussed and identified. Then the design of a wireless sensor for active diagnosis using piezoelectric sensors is introduced. The newly developed wireless sensor utilizes an optimized combination of field programmable gate array (FPGA) and conventional microcontroller to address the tradeoff between power consumption and speed requirement. The proposed damage localization algorithm, based on an energy decay model, enables wireless sensors to be practically used in active diagnosis. The power consumption for data communication can be minimized while the power budget for data processing can still be affordable for a battery-powered wireless sensor. The Levenberg–Marquardt method is employed in a mains-powered sensor node or PC to locate damage. Experimental results and discussion on the improvement of power efficiency are given

  8. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid-distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can gain more knowledge from it. PMID:26221133

  9. Improved semianalytic algorithms for finding the flux from a cylindrical source

    International Nuclear Information System (INIS)

    Wallace, O.J.

    1992-01-01

    Hand-calculation methods involving semianalytic approximations of exact flux formulas continue to be useful in shielding calculations because they enable shield design personnel to make quick estimates of dose rates, check calculations made by more exact and time-consuming methods, and rapidly determine the scope of problems. They are also a valuable teaching tool. The most useful approximate flux formula is that for the flux at a lateral detector point from a cylindrical source with an intervening slab shield. Such an approximate formula is given by Rockwell. An improved formula for this case is given by Ono and Tsuro. Shure and Wallace also give this formula together with function tables and a detailed survey of its accuracy. The second section of this paper provides an algorithm for significantly improving the accuracy of the formula of Ono and Tsuro. The flux at a detector point outside the radial and axial extensions of a cylindrical source, again with an intervening slab shield, is another case of interest, but nowhere in the literature is this arrangement of source, shield, and detector point treated. In the third section of this paper, an algorithm for this case is given, based on superposition of sources and the algorithm of Section II. 6 refs., 1 fig., 1 tab

  10. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed

  11. A GENETIC ALGORITHM USING THE LOCAL SEARCH HEURISTIC IN FACILITIES LAYOUT PROBLEM: A MEMETIC ALGORITHM APPROACH

    Directory of Open Access Journals (Sweden)

    Orhan TÜRKBEY

    2002-02-01

    Full Text Available Memetic algorithms, which use local search techniques, are hybrid algorithms within the family of evolutionary algorithms that includes genetic algorithms. In this study, a memetic algorithm using a 2-opt-like local search heuristic is developed for the Quadratic Assignment Problem (QAP). A crossover operator that has not been used before for the QAP is applied, whereas the Eshelman procedure is used in order to increase solution variability. The developed memetic algorithm is applied to test problems taken from QAP-LIB, and the results are compared with existing techniques in the literature.
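
    A compact sketch of the memetic pattern described above on a toy QAP: a permutation GA whose offspring are refined by a pairwise-swap descent. Order crossover (OX) stands in for the paper's unspecified novel operator, the Eshelman diversity procedure is omitted, and the flow and distance matrices are random.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10
F = rng.integers(0, 10, (n, n)); F = (F + F.T) // 2   # random symmetric flows
D = rng.integers(1, 10, (n, n)); D = (D + D.T) // 2   # random symmetric distances

def cost(p):
    # QAP objective: sum_ij F[i,j] * D[p[i], p[j]]
    return int(np.sum(F * D[np.ix_(p, p)]))

def swap_descent(p):
    # Pairwise-swap local search applied to every offspring (the memetic step).
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                q = p.copy(); q[i], q[j] = q[j], q[i]
                if cost(q) < cost(p):
                    p, improved = q, True
    return p

def ox_crossover(a, b):
    # Order crossover: keep a slice of parent a, fill the rest in b's order.
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, dtype=int)
    child[i:j] = a[i:j]
    child[child < 0] = [g for g in b if g not in a[i:j]]
    return child

pop = [rng.permutation(n) for _ in range(30)]
for _ in range(50):
    pop.sort(key=cost)
    parents = pop[:10]                                # elitist selection
    children = [swap_descent(ox_crossover(parents[rng.integers(10)],
                                          parents[rng.integers(10)]))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=cost)
print("best assignment:", best, "cost:", cost(best))
```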

  12. Constrained VPH+: a local path planning algorithm for a bio-inspired crawling robot with customized ultrasonic scanning sensor.

    Science.gov (United States)

    Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan

    This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.

  13. Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm

    Directory of Open Access Journals (Sweden)

    Guangbin Wang

    2015-01-01

    Full Text Available In view of the uneven distribution of real fault samples and the sensitivity of the dimension-reduction effect of the locally linear embedding (LLE) algorithm to the choice of neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using a homogenization distance instead of the traditional Euclidean distance, which helps to choose effective neighboring points for constructing the weight matrix used in dimension reduction. Because the fault-recognition improvement of HLLE is limited and unstable, the paper further proposes a new local linear embedding algorithm based on supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of homogenization distance, supervised learning adds the category information of sample points, so that sample points of the same category are gathered and sample points of heterogeneous categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments on rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.

  14. Parareal algorithms with local time-integrators for time fractional differential equations

    Science.gov (United States)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
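
    The following sketch shows the classical parareal skeleton on a scalar linear ODE; the paper's contribution, the auxiliary-variable local integrators and mixed coarse-grid correction for the fractional case, is not reproduced here.

```python
import numpy as np

# Classical parareal on y' = lam * y over [0, T] with N coarse windows.
lam, T, N = -1.0, 5.0, 20
dT = T / N

def coarse(y):
    # One backward-Euler step per window (cheap propagator G).
    return y / (1 - lam * dT)

def fine(y, m=50):
    # m small backward-Euler steps per window (accurate propagator F).
    dt = dT / m
    for _ in range(m):
        y = y / (1 - lam * dt)
    return y

U = np.zeros(N + 1); U[0] = 1.0
for k in range(N):                    # initial sequential coarse sweep
    U[k + 1] = coarse(U[k])

for _ in range(5):                    # parareal correction iterations
    F = np.array([fine(U[k]) for k in range(N)])     # parallelizable sweeps
    G_old = np.array([coarse(U[k]) for k in range(N)])
    for k in range(N):                # sequential correction:
        U[k + 1] = coarse(U[k]) + F[k] - G_old[k]    # G(new) + F(old) - G(old)

print("parareal:", U[-1], "exact:", np.exp(lam * T))
```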

  15. Random noise suppression of seismic data using non-local Bayes algorithm

    Science.gov (United States)

    Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying

    2018-02-01

    For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. The NL-Bayes algorithm uses a Gaussian model instead of the weighted average of all similar patches used in the NL-means algorithm, reducing the blurring of structural details and thereby improving denoising performance. In the denoising process of seismic data, the size and number of patches in the Gaussian model are adaptively calculated according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration makes use of the denoised seismic data from the first iteration to calculate a better mean and covariance of the patch Gaussian model, improving the similarity of patches and achieving the purpose of denoising. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of seismic data.

  16. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous solution. Simulation studies with comparisons to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
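
    For context, the sketch below implements the classic FOCUSS re-weighted iteration that CMOSS builds on; the maximum-neighbor-weight modification is not shown, and the lead-field matrix and source vector are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 100
A = rng.normal(size=(m, n))                     # synthetic lead-field matrix
x_true = np.zeros(n)
x_true[[12, 47, 80]] = [1.0, -0.8, 0.6]         # sparse "sources"
b = A @ x_true

x = np.ones(n)                                  # uniform starting solution
for _ in range(30):
    # Re-weighting from the previous solution; CMOSS would instead take
    # the maximum over each point's neighborhood here.
    W = np.diag(np.abs(x) ** 0.5)
    AW = A @ W
    # Minimum-norm solution of (A W) q = b, ridge-stabilized for safety.
    q = AW.T @ np.linalg.solve(AW @ AW.T + 1e-10 * np.eye(m), b)
    x = W @ q

print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
```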

  17. Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction

    Science.gov (United States)

    Xiong, Jun; Li, Junmin

    2017-07-01

    In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem of a network of diffusion PDEs, modelled by boundary-controlled heat equations. The problem deals with the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed by applying a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for the consensus of the multi-agent systems are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.

  18. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    Science.gov (United States)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development; it has also provided a broad stage for the application of artificial intelligence theories and methods. Because sustainable city design is essentially a constrained optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a recent swarm intelligence algorithm inspired by the "teaching" and "learning" behavior of a classroom. The evolution of the population is realized by simulating the "teaching" of the teacher and the students "learning" from each other, with the advantages of few parameters, efficiency, conceptual simplicity, and ease of implementation. It has been successfully applied to scheduling, planning, configuration, and other fields, where it has achieved good results and attracted increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement a random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has obvious advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
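
    A minimal sketch of the two TLBO phases described above on a toy sphere objective standing in for the plan-evaluation model; the population size and iteration counts are arbitrary choices, and the paper's added local-search step is not shown.

```python
import numpy as np

rng = np.random.default_rng(9)

def objective(x):
    # Toy sphere function standing in for the urban-plan evaluation model.
    return float(np.sum(x ** 2))

n_pop, dim, iters = 20, 5, 100
X = rng.uniform(-5, 5, (n_pop, dim))
f = np.array([objective(x) for x in X])

for _ in range(iters):
    # Teacher phase: pull the class toward the best learner.
    teacher = X[np.argmin(f)]
    TF = rng.integers(1, 3)                         # teaching factor in {1, 2}
    Xn = X + rng.random((n_pop, dim)) * (teacher - TF * X.mean(axis=0))
    fn = np.array([objective(x) for x in Xn])
    better = fn < f
    X[better], f[better] = Xn[better], fn[better]

    # Learner phase: each learner moves relative to a random peer.
    for i in range(n_pop):
        j = rng.integers(n_pop)
        if j == i:
            continue
        d = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
        xi = X[i] + rng.random(dim) * d
        if objective(xi) < f[i]:
            X[i], f[i] = xi, objective(xi)

print("best objective:", f.min())
```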

  19. DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    K. B. Cui

    2017-12-01

    Full Text Available Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques for the DOA estimation problem of multiple time-frequency (t-f) joint LFM sources is addressed. Previous work in the area, e.g., the STFT-MUSIC algorithm, cannot resolve completely or largely t-f joint sources because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS techniques to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve t-f largely joint or completely joint LFM sources. In addition, the proposed algorithm has quite low computational complexity when resolving multiple LFM sources because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.

  20. A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks

    Science.gov (United States)

    Rodrigues, Joel J. P. C.

    2014-01-01

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors equipped with GPS units, moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212

  1. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
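
    A minimal MLEM sketch for an under-determined Poisson inverse problem of the kind described above; the system matrix, source map, and the segmentation constraints of the actual SAM reconstruction are replaced by synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic forward model: n_meas detector measurements of n_pix pixels.
n_meas, n_pix = 40, 100
A = rng.uniform(0, 1, (n_meas, n_pix))
x_true = np.zeros(n_pix)
x_true[37] = 50.0                     # point source
x_true += 0.5                         # uniform background floor
y = rng.poisson(A @ x_true)           # Poisson-distributed counts

# MLEM multiplicative update: x <- x * A^T(y / Ax) / A^T 1.
# Well suited to Poisson noise; preserves non-negativity by construction.
x = np.ones(n_pix)                    # strictly positive starting image
sens = A.sum(axis=0)                  # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print("brightest pixel:", np.argmax(x), "(true source at 37)")
```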

  2. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  3. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
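
    A small illustration of the log–log interpolation idea above. SciPy does not ship a Steffen spline, so PCHIP, another local monotonicity-preserving cubic, stands in for it here; the sampled ELF values are invented.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Invented, non-uniformly spaced optical samples: energy (eV) vs. ELF value.
E = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 200.0])
elf = np.array([0.02, 0.10, 0.90, 2.50, 1.20, 0.15, 0.01])

# Fit in log-log space to flatten the non-uniform sample distribution,
# then map back; PCHIP keeps extrema at the data points only.
f = PchipInterpolator(np.log(E), np.log(elf))

def elf_interp(e):
    return np.exp(f(np.log(e)))

print(elf_interp(np.array([1.5, 3.0, 25.0])))
```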

  4. A Local Search Algorithm for the Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2015-01-01

    Full Text Available This paper discusses the flow shop scheduling problem to minimize the makespan with release dates. By resequencing the jobs, a modified heuristic algorithm is obtained for handling large-sized problems. Moreover, based on some properties, a local search scheme is provided to improve the heuristic and gain high-quality solutions for moderate-sized problems. A sequence-independent lower bound is presented to evaluate the performance of the algorithms. A series of simulation results demonstrates the effectiveness of the proposed algorithms.

  5. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  6. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm comprises an offline information acquisition phase and an online positioning phase. First, the AP selection algorithm is reviewed and improved based on the stability of signals, to remove unreliable APs; second, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; third, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves positioning accuracy while reducing computational complexity.

  7. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm comprises an offline information acquisition phase and an online positioning phase. First, the AP selection algorithm is reviewed and improved based on the stability of signals, to remove unreliable APs; second, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; third, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves positioning accuracy while reducing computational complexity.
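
    A minimal sketch of the offline/online pipeline using scikit-learn, with synthetic RSS fingerprints; the final ML positioning step is stood in for by a nearest-neighbour match within the selected cluster, and all dimensions and kernel parameters are assumptions rather than the paper's settings:

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(1)
        # Offline phase: RSS fingerprints (n_points x n_APs) with known positions.
        X_train = rng.normal(-70, 8, size=(200, 12))      # dBm readings, synthetic
        pos_train = rng.uniform(0, 50, size=(200, 2))     # metres

        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3)
        F_train = kpca.fit_transform(X_train)             # nonlinear features

        apc = AffinityPropagation(random_state=0).fit(F_train)
        labels = apc.labels_

        # Online phase: project the query, restrict matching to its cluster,
        # then estimate the position (nearest neighbour stands in for ML).
        x_query = rng.normal(-70, 8, size=(1, 12))
        f_query = kpca.transform(x_query)
        c = apc.predict(f_query)[0]
        members = np.where(labels == c)[0]
        nearest = members[np.argmin(
            np.linalg.norm(F_train[members] - f_query, axis=1))]
        print("estimated position:", pos_train[nearest])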

  8. Mathematical model and algorithm of operation scheduling for monitoring situation in local waters

    Directory of Open Access Journals (Sweden)

    Sokolov Boris

    2017-01-01

Full Text Available A multiple-model approach to the description and investigation of control processes in a regional maritime security system is presented. The processes considered in this paper are control processes for computing operations that provide monitoring of the situation developing in the local water area, connected with the relocation of ships of different classes (hereafter, active mobile objects (AMOs)). The previously developed concept of the active mobile object is used. The models describe the operation of the elements of the AMO automated monitoring and control system (AMCS), as well as their interaction with objects-in-service that are sources or recipients of the information being processed. The unified description of the various control processes allows the simultaneous synthesis of both the technical and functional structures of the AMO AMCS. The algorithm for solving the scheduling problem is described in terms of the classical theory of optimal automatic control.

  9. A local adaptive algorithm for emerging scale-free hierarchical networks

    International Nuclear Information System (INIS)

    Gomez Portillo, I J; Gleiser, P M

    2010-01-01

    In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.

  10. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  11. Escaping "localisms" in IT sourcing

    DEFF Research Database (Denmark)

    Mola, L.; Carugati, Andrea

    2012-01-01

Organizations are limited in their choices by the institutional environment in which they operate. This is particularly true for IT sourcing decisions that go beyond cost considerations and are constrained by traditions, geographical location, and social networks. This article investigates how......, organizations can strike a balance between the different institutional logics guiding IT sourcing decisions and eventually shift from the dominant logic of localism to a logic of market efficiency. This change does not depend on a choice but rather builds on a process through which IT management competences...

  12. Automated phase picker and source location algorithm for local distances using a single three component seismic station

    International Nuclear Information System (INIS)

    Saari, J.

    1989-12-01

The paper describes procedures for the automatic location of local events using single-site, three-component (3c) seismogram records. Epicentral distance is determined from the time difference between the P- and S-onsets. For the onset time estimates a special phase picker algorithm is introduced; onset detection is accomplished by comparing a short-term average with a long-term average after multiplication of the north, east and vertical components of the recording. For epicentral distances up to 100 km, errors seldom exceed 5 km. The slowness vector, essentially the azimuth, is estimated independently using the Christoffersson et al. (1988) 'polarization' technique, although a priori knowledge of the P-onset time gives the best results. Differences between 'true' and observed azimuths are generally less than 12 degrees. Practical examples demonstrate the viability of the procedures for automated 3c seismogram analysis. The results obtained compare favourably with those achieved by a mini-array of three stations. (orig.)
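
    The sketch below illustrates two core ingredients in Python: an STA/LTA-style onset picker and the S-P epicentral distance estimate. Window lengths, the trigger threshold and the crustal velocities are illustrative assumptions rather than the paper's calibrated values:

        import numpy as np

        def moving_avg(x, n):
            # Causal (trailing-window) moving average, zero-padded at the start.
            return np.convolve(x, np.ones(n) / n)[: x.size]

        def sta_lta(trace, n_sta, n_lta):
            # Ratio of short-term to long-term average signal energy.
            e = trace ** 2
            return moving_avg(e, n_sta) / np.maximum(moving_avg(e, n_lta), 1e-12)

        def pick_onset(trace, fs, threshold=3.0):
            n_sta, n_lta = int(0.5 * fs), int(5.0 * fs)   # 0.5 s / 5 s windows
            ratio = sta_lta(trace, n_sta, n_lta)
            ratio[:n_lta] = 0.0                # ignore the LTA warm-up interval
            hits = np.flatnonzero(ratio > threshold)
            return hits[0] / fs if hits.size else None

        def sp_distance(t_p, t_s, vp=6.0, vs=3.5):
            # Epicentral distance (km) from the S-P time, assuming constant
            # crustal velocities (vp, vs in km/s are illustrative values).
            return (t_s - t_p) * vp * vs / (vp - vs)

        fs = 100.0
        rng = np.random.default_rng(2)
        t = np.arange(0.0, 30.0, 1.0 / fs)
        trace = 0.1 * rng.standard_normal(t.size)
        trace[t >= 8.0] += np.sin(2 * np.pi * 5.0 * t[t >= 8.0])  # arrival at 8 s
        t_p = pick_onset(trace, fs)
        print("picked onset near %.2f s" % t_p)
        print("S-P distance for t_s = 12.0 s: %.1f km" % sp_distance(t_p, 12.0))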

  13. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2016-02-01

Full Text Available Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field.

  14. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-02-06

Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field.
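
    The range-based PSO stage can be sketched as follows: a plain global-best PSO minimizing the sum of squared range residuals. The mobility-prediction stage is omitted, and the coefficients are conventional defaults rather than the paper's settings:

        import numpy as np

        def pso_localize(anchors, ranges, n_particles=40, iters=200, seed=3):
            # Minimise the sum of squared range residuals with global-best PSO.
            rng = np.random.default_rng(seed)
            lo, hi = anchors.min(0) - 50, anchors.max(0) + 50
            x = rng.uniform(lo, hi, size=(n_particles, anchors.shape[1]))
            v = np.zeros_like(x)
            def cost(p):
                d = np.linalg.norm(p[:, None, :] - anchors[None], axis=2)
                return ((d - ranges) ** 2).sum(1)
            pbest, pbest_c = x.copy(), cost(x)
            gbest = pbest[pbest_c.argmin()]
            w, c1, c2 = 0.72, 1.49, 1.49       # standard PSO coefficients
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                c = cost(x)
                better = c < pbest_c
                pbest[better], pbest_c[better] = x[better], c[better]
                gbest = pbest[pbest_c.argmin()]
            return gbest

        anchors = np.array([[0., 0., 0.], [100., 0., 5.],
                            [0., 100., 5.], [100., 100., 0.]])
        true = np.array([37., 58., 12.])
        noise = np.random.default_rng(4).normal(0, 0.5, 4)
        ranges = np.linalg.norm(anchors - true, axis=1) + noise
        print(pso_localize(anchors, ranges))   # close to the true position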

  15. Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.

    Science.gov (United States)

    Hao, Ming; Bryant, Stephen H; Wang, Yanli

    2018-02-06

While novel technologies such as high-throughput screening have advanced, together with significant investment by pharmaceutical companies during the past decades, the success rate for drug development has not yet improved, prompting researchers to look for new strategies of drug discovery. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, and they have proved to be successful in drug discovery. Doubtlessly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial for developing effective in silico drug repositioning methods that allow the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with publicly accessible source code for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language and compared them by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.

  16. Gas-leak localization using distributed ultrasonic sensors

    Science.gov (United States)

    Huseynov, Javid; Baliga, Shankar; Dillencourt, Michael; Bic, Lubomir; Bagherzadeh, Nader

    2009-03-01

We propose an ultrasonic gas leak localization system based on a distributed network of sensors. The system deploys highly sensitive miniature Micro-Electro-Mechanical Systems (MEMS) microphones and uses a suite of energy-decay (ED) and time-delay-of-arrival (TDOA) algorithms for localizing the source of a gas leak. Statistical tools such as the maximum likelihood (ML) and least squares (LS) estimators are used for approximating the source location when closed-form solutions fail in the presence of ambient background nuisance and inherent electronic noise. The proposed localization algorithms were implemented and tested using a Java-based simulation platform connected to four or more distributed MEMS microphones observing a broadband nitrogen leak from an orifice. The performance of centralized and decentralized algorithms under the ED and TDOA schemes is analyzed and compared in terms of communication overhead and accuracy in the presence of additive white Gaussian noise (AWGN).
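
    As an illustration of the TDOA branch, the following sketch solves the classic linearised least-squares formulation, treating the range to a reference microphone as an auxiliary unknown. The geometry and sound speed are hypothetical, and the paper's ML and ED variants are not reproduced:

        import numpy as np

        def tdoa_ls(sensors, tdoa, c=343.0):
            # Linearised TDOA solver: unknowns are the source position and the
            # range r0 to the reference sensor (sensors[0]).
            s0, si = sensors[0], sensors[1:]
            d = c * tdoa                                   # range differences
            A = np.hstack([2 * (si - s0), 2 * d[:, None]])
            b = (si ** 2).sum(1) - (s0 ** 2).sum() - d ** 2
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            return sol[:-1]                                # drop auxiliary r0

        # Four microphones and a hypothetical leak position (metres).
        mics = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
        src = np.array([3.2, 1.4])
        r = np.linalg.norm(mics - src, axis=1)
        tdoa = (r[1:] - r[0]) / 343.0                      # delays vs. mic 0
        print(tdoa_ls(mics, tdoa))                         # recovers src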

  17. Algorithms for the process management of sealed source brachytherapy

    International Nuclear Information System (INIS)

    Engler, M.J.; Ulin, K.; Sternick, E.S.

    1996-01-01

Incidents and misadministrations suggest that brachytherapy may benefit from clarification of the quality management program and other mandates of the US Nuclear Regulatory Commission. To that end, flowcharts of step-by-step subprocesses were developed and formatted with dedicated software. The overall process was similarly organized in a complex flowchart termed a general process map. Procedural and structural indicators associated with each flowchart and map were critiqued and pre-existing documentation was revised. “Step-regulation tables” were created to refer steps and subprocesses to Nuclear Regulatory Commission rules and recommendations in their sequences of applicability. Brachytherapy algorithms were specified as programmable, recursive processes, including therapeutic dose determination and monitoring doses to the public. These algorithms are embodied in flowcharts and step-regulation tables. A general algorithm is suggested as a template from which other facilities may derive tools to facilitate process management of sealed source brachytherapy. 11 refs., 9 figs., 2 tabs

  18. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    Science.gov (United States)

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be

  19. Localization of Vibrating Noise Sources in Nuclear Reactor Cores

    International Nuclear Information System (INIS)

    Hultqvist, Pontus

    2004-09-01

In this thesis the possibility of locating vibrating noise sources in a nuclear reactor core from the neutron noise has been investigated using different localization methods. The influence of the vibrating noise source has been treated as a small perturbation of the neutron flux inside the reactor, and linear perturbation theory has been used to construct the theoretical framework upon which the localization methods are based. Two different cases have been considered: one where a one-dimensional one-group model is used and another where a two-dimensional two-energy-group noise simulator is used. In the first case only one localization method is able to determine the position with good accuracy. This localization method is based on finding roots of an equation and is sensitive to other perturbations of the neutron flux; it therefore works better with the assistance of approximate methods that reconstruct the noise source and indicate whether the results are reliable. In the two-dimensional case the results are more promising. Several localization techniques reproduce both the vibrating noise source position and the direction of vibration with sufficient precision. The approximate methods that reconstruct the noise source perform substantially better and are able to support the root-finding method in a more constructive way. By combining the methods, the results become more reliable.

  20. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    Science.gov (United States)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subject to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI-constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatially and temporally variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamic activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performance of the two inverse methods was evaluated in terms of both spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be

  1. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    Science.gov (United States)

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

The high-throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general-purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis, provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip

  2. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

Full Text Available Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of the systems by solving the appropriate inverse problems. The input data required for a solution are obtained from the analysis of experimentally determined characteristics of a system or process. The sought quantities include the coefficients of the equations in the mathematical model of the object, the boundary conditions, and so on. The optimization approach is one of the main ways to solve such inverse problems. In the general case it is necessary to find the global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnosis, as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligent technologies. The increasingly complicated systems studied during the last decades lead to more complicated mathematical models, making the solution of the corresponding extremal problems significantly more difficult. In many practical applications the problem conditions can restrict modeling; as a consequence, the criterion functions in inverse problems can be noisy and not everywhere differentiable. The presence of noise makes calculating the derivatives difficult and unreliable, which motivates the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extremal problem; when the number of variables is large, stochastic global optimization algorithms are used instead. Since stochastic algorithms can yield overly expensive solutions, hybrid algorithms have been developed that combine a stochastic algorithm for scanning the variable space with a deterministic local search
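
    For reference, a compact Python sketch of the Hooke–Jeeves pattern search named in the title, i.e. the deterministic local-search component of such a hybrid. The step sizes and the Rosenbrock test function are illustrative choices:

        import numpy as np

        def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
            # Derivative-free pattern search: exploratory moves along the
            # coordinate axes, then a pattern (acceleration) move.
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            def explore(base, fbase, h):
                y, fy = base.copy(), fbase
                for i in range(y.size):
                    for d in (h, -h):
                        trial = y.copy(); trial[i] += d
                        ft = f(trial)
                        if ft < fy:
                            y, fy = trial, ft
                            break
                return y, fy
            for _ in range(max_iter):
                y, fy = explore(x, fx, step)
                if fy < fx:
                    z = y + (y - x)            # pattern move through y
                    x, fx = y, fy
                    zy, fzy = explore(z, f(z), step)
                    if fzy < fx:
                        x, fx = zy, fzy
                else:
                    step *= shrink             # no improvement: refine the mesh
                    if step < tol:
                        break
            return x, fx

        rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
        print(hooke_jeeves(rosen, [-1.2, 1.0]))   # converges towards (1, 1)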

  3. Extraction of Tantalum from locally sourced Tantalite using ...

    African Journals Online (AJOL)

    Extraction of Tantalum from locally sourced Tantalite using ... ABSTRACT: The ability of polyethylene glycol solution to extract tantalum from locally .... metal ion in question by the particular extractant. ... Loparite, a rare-earth ore (Ce, Na,.

  4. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    Science.gov (United States)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
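
    A rough Python analogue of this kind of source finding — background estimation, a scaled signal-to-noise threshold, pixel exclusion and connected-region labelling — is sketched below; the filter size, threshold and noise model are assumptions and do not reproduce LEXTRCT itself:

        import numpy as np
        from scipy import ndimage

        def find_bright_points(img, snr=5.0, min_pix=2, mask=None):
            # Flag pixels exceeding the local background by snr times a
            # photon-noise estimate, then label connected regions.
            bg = ndimage.median_filter(img, size=15)    # smooth background map
            noise = np.sqrt(np.clip(bg, 1.0, None))     # Poisson-like proxy
            hot = (img - bg) > snr * noise
            if mask is not None:
                hot &= ~mask                            # exclude e.g. saturation
            labels, n = ndimage.label(hot)
            out = []
            for k in range(1, n + 1):
                ys, xs = np.where(labels == k)
                if ys.size >= min_pix:
                    out.append((xs.mean(), ys.mean(), ys.size))  # centroid, area
            return out

        rng = np.random.default_rng(5)
        frame = rng.poisson(20, size=(256, 256)).astype(float)
        frame[100:103, 50:53] += 60                     # synthetic bright point
        print(find_bright_points(frame))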

  5. Localization Algorithm Based on a Spring Model (LASM for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

Full Text Available A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes: the sensor nodes are treated as particles with masses, connected to their neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move from randomly set positions towards their true positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the forces exerted by the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
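
    The spring analogy can be sketched directly: each blind node is pulled along the net Hooke force of springs whose rest lengths are the measured inter-node distances. The toy setup below (four fixed anchors, one blind node, noiseless ranges) is illustrative and omits the paper's three patches:

        import numpy as np

        def spring_localize(pos, dist, anchors, iters=500, k=0.1):
            # Move each blind node along the net spring force from its
            # neighbours; rest lengths are the measured distances.
            pos = {i: p.astype(float).copy() for i, p in pos.items()}
            for _ in range(iters):
                for i in dist:                       # i: blind node index
                    force = np.zeros(2)
                    for j, d_ij in dist[i].items():  # j: ranged neighbour
                        delta = pos[j] - pos[i]
                        norm = np.linalg.norm(delta) or 1e-9
                        force += (norm - d_ij) * delta / norm   # Hooke's law
                    if i not in anchors:
                        pos[i] += k * force          # anchors stay fixed
            return pos

        pos = {0: np.array([0., 0.]), 1: np.array([10., 0.]),
               2: np.array([0., 10.]), 3: np.array([10., 10.]),
               4: np.array([2., 2.])}                # bad initial guess, node 4
        true4 = np.array([7., 3.])
        dist = {4: {j: float(np.linalg.norm(pos[j] - true4)) for j in range(4)}}
        print(spring_localize(pos, dist, anchors={0, 1, 2, 3})[4])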

  6. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

Shack-Hartmann-based Adaptive Optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived, and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
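
    The sketch below contrasts the two estimators on a synthetic subaperture spot: a background-subtracted centroid versus cross-correlation with an ideal reference spot, refined by a parabolic sub-pixel fit. The spot model, sizes and noise level are assumptions:

        import numpy as np
        from scipy.signal import fftconvolve

        def centroid(spot):
            # Centre of mass of the background-subtracted subaperture image.
            y, x = np.indices(spot.shape)
            w = np.clip(spot - spot.mean(), 0, None)
            return (x * w).sum() / w.sum(), (y * w).sum() / w.sum()

        def correlate(spot, ref, ref_center=(7.5, 7.5)):
            # Cross-correlate with the reference spot, take the peak, and
            # refine it with a 1-D parabolic fit along each axis.
            c = fftconvolve(spot, ref[::-1, ::-1], mode="same")
            py, px = np.unravel_index(c.argmax(), c.shape)
            l, m, r = c[py, px-1], c[py, px], c[py, px+1]
            dx = 0.5 * (l - r) / (l - 2*m + r)
            u, mm, dn = c[py-1, px], c[py, px], c[py+1, px]
            dy = 0.5 * (u - dn) / (u - 2*mm + dn)
            zero = np.array(spot.shape)[::-1] // 2      # zero-lag index (x, y)
            return (ref_center[0] + px + dx - zero[0],
                    ref_center[1] + py + dy - zero[1])

        def gauss_spot(shape, cx, cy, sigma=1.5):
            y, x = np.indices(shape)
            return np.exp(-((x-cx)**2 + (y-cy)**2) / (2*sigma**2))

        ref = gauss_spot((16, 16), 7.5, 7.5)
        spot = gauss_spot((16, 16), 8.3, 7.1) \
            + 0.05 * np.random.default_rng(6).random((16, 16))
        print("centroid:   ", centroid(spot))
        print("correlation:", correlate(spot, ref))    # both near (8.3, 7.1)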

  7. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  8. Research on fully distributed optical fiber sensing security system localization algorithm

    Science.gov (United States)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple data groups separately, it not only exploits the advantages of frequency analysis to determine the most effective data group more accurately, but also meets the real-time requirement of the monitoring system. Supplemented with a short-term energy calculation of the grouped signal, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point can be obtained through the cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out interference from non-climbing behavior.
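
    A toy version of the two-stage idea — pick the event window by its short-time zero-crossing rate, then cross-correlate the two channels for the arrival-time difference — is sketched below. The signal model, the sampling rate and the final delay-to-position mapping are illustrative assumptions, since the exact mapping depends on the interferometer geometry:

        import numpy as np

        def short_time_zcr(x, win):
            # Fraction of sign changes inside each non-overlapping window.
            n = x.size // win
            frames = x[: n * win].reshape(n, win)
            return (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

        def locate(sig_a, sig_b, fs, v, length):
            # Stage 1: pick the frame with the highest ZCR as the event window.
            win = int(0.01 * fs)                     # 10 ms frames
            k = int(np.argmax(short_time_zcr(sig_a, win)))
            a = sig_a[k * win:(k + 10) * win]
            b = sig_b[k * win:(k + 10) * win]
            # Stage 2: cross-correlate the two outputs for the delay.
            xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
            lag = ((a.size - 1) - int(xc.argmax())) / fs
            return 0.5 * (length + v * lag)          # one common convention

        fs, v, L = 1.0e5, 2.0e8, 5000.0    # sample rate, speed in fibre, length
        rng = np.random.default_rng(7)
        t = np.arange(9000) / fs
        sig_a = 0.05 * np.sin(2 * np.pi * 50 * t)     # quiet low-ZCR background
        sig_a[3000:5000] += rng.standard_normal(2000) # wideband climbing burst
        sig_b = np.roll(sig_a, 2) + 0.01 * rng.standard_normal(sig_a.size)
        print("estimated position: %.0f m" % locate(sig_a, sig_b, fs, v, L))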

  9. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

Nowadays, computer programming is increasingly necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework, and it is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…

  10. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model; the proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for the linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  11. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    Science.gov (United States)

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  12. Local fractional variational iteration algorithm III for the diffusion model associated with non-differentiable heat transfer

    Directory of Open Access Journals (Sweden)

    Meng Zhi-Jun

    2016-01-01

    Full Text Available This paper addresses a new application of the local fractional variational iteration algorithm III to solve the local fractional diffusion equation defined on Cantor sets associated with non-differentiable heat transfer.

  13. A Combinatorial Benders’ Cuts Algorithm for the Local Container Drayage Problem

    Directory of Open Access Journals (Sweden)

    Zhaojie Xue

    2015-01-01

Full Text Available This paper examines the local container drayage problem under a special operation mode in which tractors and trailers can be separated; that is, tractors can be assigned to a new task at another location while trailers with containers are waiting for packing or unpacking. Meanwhile, the strategy of sharing empty containers between different customers is also considered to improve efficiency and lower the operation cost. The problem is formulated as a vehicle routing and scheduling problem with temporal constraints. We adopt a combinatorial Benders' cuts algorithm to solve this problem. Numerical experiments are performed on a group of randomly generated instances to test the performance of the proposed algorithm.

  14. A Novel Enhanced Positioning Trilateration Algorithm Implemented for Medical Implant In-Body Localization

    Directory of Open Access Journals (Sweden)

    Peter Brida

    2013-01-01

Full Text Available Medical implants based on wireless communication will play a crucial role in healthcare systems. Some applications need to know the exact position of each implant. RF positioning seems to be an effective approach for implant localization. The two most common types of positioning data typically used for RF positioning are the received signal strength and the time of flight of a radio signal between the transmitter and the receivers (the medical implant and a network of reference devices with known positions). This leads to two positioning methods: received signal strength (RSS) and time of arrival (ToA). Both methods are based on trilateration. The positioning data used are very important, but the positioning algorithm that estimates the implant position is important as well. In this paper, a novel algorithm for trilateration is presented. The proposed algorithm improves on the quality of basic trilateration algorithms given the same quality of measured positioning data. It is called the Enhanced Positioning Trilateration Algorithm (EPTA). The proposed algorithm can be divided into two phases. The first phase is focused on selecting the most suitable sensors for the position estimation. The goal of the second is to improve positioning accuracy by means of an adaptive algorithm. Finally, we provide a performance analysis of the proposed algorithm by computer simulations.
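
    The two-phase structure can be illustrated with a short sketch: a selection step that keeps the presumably most reliable anchors, followed by linearised least-squares trilateration. The selection rule, geometry and noise level are assumptions; the actual EPTA criteria are not reproduced here:

        import numpy as np

        def trilaterate(anchors, ranges, k=4):
            # Phase 1 (selection): keep the k anchors with the shortest measured
            # ranges, assuming near measurements are the most reliable.
            idx = np.argsort(ranges)[:k]
            a, r = anchors[idx], ranges[idx]
            # Phase 2 (estimation): linearise by subtracting the first sphere
            # equation from the others and solve in the least-squares sense.
            A = 2 * (a[1:] - a[0])
            b = (r[0]**2 - r[1:]**2) + (a[1:]**2).sum(1) - (a[0]**2).sum()
            est, *_ = np.linalg.lstsq(A, b, rcond=None)
            return est

        anchors = np.array([[0., 0.], [8., 0.], [0., 8.], [8., 8.], [4., 12.]])
        true = np.array([2.5, 3.0])
        rng = np.random.default_rng(8)
        ranges = np.linalg.norm(anchors - true, axis=1) + rng.normal(0, 0.05, 5)
        print(trilaterate(anchors, ranges))        # close to (2.5, 3.0)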

  15. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    Energy Technology Data Exchange (ETDEWEB)

    Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
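
    The estimator described — a minimum-mean-square-error linear inverse with Gaussian source and noise priors — has the standard closed form sketched below. The lead-field matrix and prior covariances are randomly generated stand-ins, with dimensions loosely echoing the 387-site example:

        import numpy as np

        def mmse_inverse(A, b, C_x, C_n):
            # MMSE/MAP estimate for b = A x + n with Gaussian priors:
            # x_hat = (A' Cn^-1 A + Cx^-1)^-1 A' Cn^-1 b
            Cn_inv = np.linalg.inv(C_n)
            M = A.T @ Cn_inv @ A + np.linalg.inv(C_x)
            return np.linalg.solve(M, A.T @ Cn_inv @ b)

        rng = np.random.default_rng(9)
        n_sens, n_src = 64, 387                  # sensors, candidate sites
        A = rng.normal(size=(n_sens, n_src))     # stand-in forward matrix
        prior_var = np.full(n_src, 1e-2)
        active = [10, 50, 120, 200, 300]
        prior_var[active] = 1.0                  # five sites favoured a priori
        C_x, C_n = np.diag(prior_var), 0.01 * np.eye(n_sens)
        x_true = np.zeros(n_src)
        x_true[active] = rng.normal(size=5)
        b = A @ x_true + rng.normal(0, 0.1, n_sens)
        x_hat = mmse_inverse(A, b, C_x, C_n)
        print("top sites:", np.sort(np.argsort(-np.abs(x_hat))[:5]))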

  16. Blind source separation advances in theory, algorithms and applications

    CERN Document Server

    Wang, Wenwu

    2014-01-01

Blind Source Separation reports new results from the study of blind source separation (BSS). The book collects novel research ideas and training material in BSS, independent component analysis (ICA), artificial intelligence and signal processing applications. Furthermore, research results previously scattered across many journals and conferences worldwide are methodically edited and presented in a unified form. The book is likely to be of interest to university researchers, R&D engineers and graduate students in computer science and electronics who wish to learn the core principles, methods, algorithms, and applications of BSS. Dr. Ganesh R. Naik works at the University of Technology, Sydney, Australia; Dr. Wenwu Wang works at the University of Surrey, UK.

  17. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Science.gov (United States)

    Pavlov, V. M.

    2017-07-01

The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows a compact formula to be obtained for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of Bessel functions of orders 0 and 1, whereas the previous algorithm additionally requires the second-order roots.

  18. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

Full Text Available At present, most link prediction algorithms are based on the similarity between two entities. Social network topology information is one of the main sources for designing the similarity function between entities, but existing link prediction algorithms do not exploit network topology information sufficiently. To address the shortcomings of traditional link prediction algorithms, we propose two improved algorithms: the CNGF algorithm, based on local information, and the KatzGF algorithm, based on global network information. Because social networks are not static, we also provide a link prediction algorithm based on nodes' multiple-attribute information. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of traditional link prediction algorithms.
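
    For orientation, the two baselines that CNGF and KatzGF refine can be sketched in a few lines with NetworkX. The degree weighting of CNGF itself is not reproduced, and beta and the truncation depth of the Katz index are illustrative:

        import numpy as np
        import networkx as nx

        def common_neighbour_scores(G):
            # Local-information baseline: score a non-edge by its number of
            # common neighbours (CNGF refines this with degree weighting).
            return {(u, v): len(list(nx.common_neighbors(G, u, v)))
                    for u, v in nx.non_edges(G)}

        def katz_scores(G, beta=0.05, kmax=5):
            # Global-information baseline: truncated Katz index, summing
            # walks of all lengths with attenuation factor beta.
            A = nx.to_numpy_array(G)
            S, Ak = np.zeros_like(A), np.eye(A.shape[0])
            for k in range(1, kmax + 1):
                Ak = Ak @ A
                S += beta ** k * Ak
            nodes = list(G.nodes())
            return {(u, v): S[nodes.index(u), nodes.index(v)]
                    for u, v in nx.non_edges(G)}

        G = nx.karate_club_graph()
        cn, kz = common_neighbour_scores(G), katz_scores(G)
        print(sorted(cn, key=cn.get, reverse=True)[:3])   # likely new links
        print(sorted(kz, key=kz.get, reverse=True)[:3])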

  19. Predicting Post-Translational Modifications from Local Sequence Fragments Using Machine Learning Algorithms: Overview and Best Practices.

    Science.gov (United States)

    Tatjewski, Marcin; Kierczak, Marcin; Plewczynski, Dariusz

    2017-01-01

Here, we present two perspectives on the task of predicting post-translational modifications (PTMs) from local sequence fragments using machine learning algorithms. The first is a description of the fundamental steps required to construct a PTM predictor from the very beginning: data gathering, feature extraction, and machine-learning classifier selection. The second part of our work contains a detailed discussion of the more advanced problems encountered in the PTM prediction task. Probably the most challenging issues covered here are: (1) how to address the class imbalance problem in the training data (we also present statistics describing the problem); (2) how to properly set up cross-validation folds with an approach that takes into account the homology of protein data records (to address this problem we present our folds-over-clusters algorithm); and (3) how to efficiently reach for new sources of learning features. The presented techniques and notes resulted from intense studies in the field, performed by our and other groups, and can be useful both for researchers beginning in the field of PTM prediction and for those who want to extend the repertoire of their research techniques.

  20. Local search for Steiner tree problems in graphs

    NARCIS (Netherlands)

    Verhoeven, M.G.A.; Severens, M.E.M.; Aarts, E.H.L.; Rayward-Smith, V.J.; Reeves, C.R.; Smith, G.D.

    1996-01-01

We present a local search algorithm for the Steiner tree problem in graphs, which uses a neighbourhood in which paths in a Steiner tree are exchanged. The exchange function of this neighbourhood is based on a multiple-source shortest-path algorithm. We present computational results for a known

  1. Sub-OBB based object recognition and localization algorithm using range images

    International Nuclear Information System (INIS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-01-01

This paper presents a novel approach to recognize and estimate the pose of 3D objects in cluttered range images. The key technical contribution of the developed approach is that it enables robust object recognition and localization under adverse conditions such as environmental illumination variation and partial optical occlusion of the object. First, the acquired point clouds are segmented into individual object point clouds based on the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm called Sub-OBB-based object recognition, using the proposed oriented-bounding-box (OBB) regional area-based descriptor, is performed to reliably recognize the object. Then, the 3D position and orientation of the object can be roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To determine the accurate pose of the object, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. Feasibility tests over several scenarios verify that the developed approach is suitable for object pose recognition and localization. (paper)

  2. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of the received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieves the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under a scenario in which 40% of the RNs failed.

  3. A Localization Algorithm Based on AOA for Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Sun Lee

    2012-01-01

Full Text Available Knowledge of the positions of sensor nodes in Wireless Sensor Networks (WSNs) will make possible many applications such as asset monitoring, object tracking and routing. In WSNs, errors may occur in the measurement of distances and angles between pairs of nodes, and these errors propagate to other nodes, so the estimation of sensor node positions can be difficult and suffer large errors. In this paper, we propose a localization algorithm based on both the distance and the angle to a landmark. We introduce a method based on the incident angle to a landmark, together with an algorithm that exchanges physical data such as distances and incident angles and updates the position of a node by utilizing multiple landmarks and multiple paths to landmarks.

  4. Vector-Sensor MUSIC for Polarized Seismic Sources Localization

    Directory of Open Access Journals (Sweden)

    Jérôme I. Mars

    2005-01-01

Full Text Available This paper addresses the problem of high-resolution polarized source detection and introduces a new eigenstructure-based algorithm that yields direction-of-arrival (DOA) and polarization estimates using a vector-sensor (or multicomponent-sensor) array. This method is based on separation of the observation space into signal and noise subspaces using fourth-order tensor decomposition. In geophysics, in particular for reservoir acquisition and monitoring, a set of Nx multicomponent sensors is laid on the ground with a constant distance Δx between them. Such a data acquisition scheme has intrinsically three modes: time, distance, and components. The proposed method uses multilinear algebra in order to preserve the data structure and avoid reorganization; the data are thus stored in three-dimensional arrays rather than matrices. Higher-order eigenvalue decomposition (HOEVD) of fourth-order tensors is considered to achieve the subspace estimation and to compute the eigenelements. We propose a tensorial version of the MUSIC algorithm for a vector-sensor array, allowing joint estimation of the DOA and the signal polarization. The performance of the proposed algorithm is evaluated.
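
    The tensor formulation is beyond a short sketch, but the underlying subspace idea is that of standard MUSIC, shown below for a scalar uniform linear array. The element spacing, source directions and noise level are synthetic assumptions:

        import numpy as np
        from scipy.signal import find_peaks

        def music_spectrum(X, n_src, d=0.5, grid=np.linspace(-90, 90, 361)):
            # Matrix MUSIC on a uniform linear array; the paper's tensor
            # version generalises this to multicomponent sensors.
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, V = np.linalg.eigh(R)                 # ascending eigenvalues
            En = V[:, : X.shape[0] - n_src]          # noise subspace
            P = []
            for theta in np.deg2rad(grid):
                a = np.exp(-2j * np.pi * d
                           * np.arange(X.shape[0]) * np.sin(theta))
                P.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
            return grid, np.array(P)

        rng = np.random.default_rng(10)
        m, n = 8, 200                                # sensors, snapshots
        doas = np.deg2rad([-20.0, 35.0])             # two sources
        A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
        S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n)))
        X = A @ S + 0.1 * (rng.standard_normal((m, n))
                           + 1j * rng.standard_normal((m, n)))
        grid, P = music_spectrum(X, n_src=2)
        pk, _ = find_peaks(P)
        top = pk[np.argsort(P[pk])[-2:]]
        print("estimated DOAs:", np.sort(grid[top]))  # near -20 and 35 degrees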

  5. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

The proposed distributed wavelet-based algorithms provide a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content, as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  6. "Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado

    Science.gov (United States)

    DeMets, C. M.

    2012-12-01

Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. A review of the literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore source only a few local items. The findings show that there are four barriers and nine solutions commonly identified by the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments

  7. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    Science.gov (United States)

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy-an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.

  8. Hitting times of local and global optima in genetic algorithms with very high selection pressure

    Directory of Open Access Journals (Sweden)

    Eremeev Anton V.

    2017-01-01

    Full Text Available The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant which is less than one.
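
    For intuition, a minimal sketch follows of a non-elitist generational GA with very high selection pressure (a large k-tournament), measuring the first hitting time of the global optimum on the OneMax toy problem; the population size, tournament size and mutation rate are arbitrary illustrative choices, not the paper's setting.

      import random

      def onemax(x):
          return sum(x)

      def ga_hitting_time(n=20, pop_size=40, k=10, max_gens=10000, seed=1):
          # non-elitist generational GA on OneMax; returns the first
          # generation in which the global optimum (all ones) appears
          rng = random.Random(seed)
          p_mut = 1.0 / n
          pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
          for gen in range(max_gens):
              if any(onemax(x) == n for x in pop):
                  return gen
              new_pop = []
              for _ in range(pop_size):
                  # k-tournament selection: a large k means very high selection pressure
                  parent = max(rng.sample(pop, k), key=onemax)
                  new_pop.append([bit ^ (rng.random() < p_mut) for bit in parent])
              pop = new_pop
          return max_gens  # optimum not reached within the budget

      print(ga_hitting_time())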

  9. On the Impact of Localization and Density Control Algorithms in Target Tracking Applications for Wireless Sensor Networks

    Science.gov (United States)

    Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.

    2012-01-01

    Target tracking is an important application of wireless sensor networks. The networks' ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments. Therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, OGDC is the best option among the three evaluated density control algorithms. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329

  10. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  11. Localization of gravitational wave sources with networks of advanced detectors

    International Nuclear Information System (INIS)

    Klimenko, S.; Mitselmakher, G.; Pankow, C.; Vedovato, G.; Drago, M.; Prodi, G.; Mazzolo, G.; Salemi, F.; Re, V.; Yakushin, I.

    2011-01-01

    Coincident observations with gravitational wave (GW) detectors and other astronomical instruments are among the main objectives of the experiments with the network of LIGO, Virgo, and GEO detectors. They will become a necessary part of future GW astronomy as the next generation of advanced detectors comes online. The success of such joint observations directly depends on the source localization capabilities of the GW detectors. In this paper we present studies of the sky localization of transient GW sources with the future advanced detector networks and describe their fundamental properties. By reconstructing sky coordinates of ad hoc signals injected into simulated detector noise, we study the accuracy of the source localization and its dependence on the strength of injected signals, waveforms, and network configurations.

  12. A Modified Load Flow Algorithm in Power Systems with Alternative Energy Sources

    International Nuclear Information System (INIS)

    Contreras, D.L.; Cañedo, J.M.

    2017-01-01

    In this paper an algorithm for calculating the steady state of electrical networks including wind and photovoltaic generation is presented. The wind generators considered are asynchronous (squirrel cage and doubly fed) and synchronous generators using permanent magnets. The proposed algorithm is based on the formulation of nodal power injections, solved with the modified Newton-Raphson technique in its polar formulation using complex matrix notation. The power injection of each wind and photovoltaic generator is calculated independently in each iteration according to its particular mathematical model, which is generally non-linear. Results are presented for a 30-node test system. The computation time of the proposed algorithm is compared with the conventional methodology for including alternative energy sources in power flow studies. (author)
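
    As a rough illustration of the underlying numerical machinery (not the paper's formulation), the sketch below runs a Newton-Raphson power flow in polar form for a hypothetical 2-bus system with one slack bus and one PQ load, using a finite-difference Jacobian; the line admittance and load values are invented.

      import numpy as np

      y = 1.0 / (0.01 + 0.1j)                      # assumed line admittance (pu)
      Y = np.array([[y, -y], [-y, y]])             # 2-bus admittance matrix
      S_spec = -(0.8 + 0.4j)                       # assumed PQ-bus injection (a load)

      def mismatch(x):
          theta, v = x
          V = np.array([1.0 + 0.0j, v * np.exp(1j * theta)])   # bus 0 is the slack
          S_calc = (V * np.conj(Y @ V))[1]                     # injection at bus 1
          return np.array([S_calc.real - S_spec.real, S_calc.imag - S_spec.imag])

      x = np.array([0.0, 1.0])                     # flat start: angle 0, |V| = 1
      for _ in range(20):
          f = mismatch(x)
          if np.max(np.abs(f)) < 1e-8:
              break
          J = np.empty((2, 2))                     # finite-difference Jacobian
          for j in range(2):
              dx = np.zeros(2)
              dx[j] = 1e-6
              J[:, j] = (mismatch(x + dx) - f) / 1e-6
          x = x - np.linalg.solve(J, f)

      print("theta2 = %.4f rad, |V2| = %.4f pu" % (x[0], x[1]))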

  13. Distributed, signal strength-based indoor localization algorithm for use in healthcare environments.

    Science.gov (United States)

    Wyffels, Jeroen; De Brabanter, Jos; Crombez, Pieter; Verhoeve, Piet; Nauwelaers, Bart; De Strycker, Lieven

    2014-11-01

    In current healthcare environments, a trend toward mobile and personalized interactions between people and nurse call systems is strongly noticeable. Therefore, it should be possible to locate patients at all times and in all places throughout the care facility. This paper aims at describing a method by which a mobile node can locate itself indoors, based on signal strength measurements and a minimal amount of yes/no decisions. The algorithm has been developed specifically for use in a healthcare environment. With extensive testing and statistical support, we prove that our algorithm can be used in a healthcare setting with an envisioned level of localization accuracy up to room level (or region level in a corridor), while avoiding heavy investments, since the hardware of an existing nurse call network can be reused. The approach opted for leads to very high scalability, since thousands of mobile nodes can locate themselves. Network timing issues and localization update delays are avoided, which ensures that a patient can receive the needed care in a time- and resource-efficient way.
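
    A minimal sketch of the room-level decision idea follows; the RSS threshold and readings from fixed nodes are invented for illustration, and the paper's actual decision rules may differ.

      # Each fixed node "votes yes" when the mobile node's reported RSS for it
      # exceeds a threshold; ties are broken by the strongest reading.
      def locate_room(rss_readings, threshold=-70.0):
          # rss_readings maps fixed-node id -> RSS (dBm) heard by the mobile node
          candidates = [n for n, rss in rss_readings.items() if rss >= threshold]
          if not candidates:
              return None                      # no room-level decision possible
          return max(candidates, key=lambda n: rss_readings[n])

      print(locate_room({"room_12": -58.0, "room_13": -74.0, "corridor_A": -81.0}))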

  14. New Advanced Source Identification Algorithm (ASIA-NEW) for radiation monitors with plastic detectors

    Energy Technology Data Exchange (ETDEWEB)

    Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)

    2015-07-01

    Radiation Portal Monitors (RPM) with plastic detectors represent the main instruments used for primary border (customs) radiation control. RPM are widely used because they are simple, reliable, relatively inexpensive and have a high sensitivity. However, experience using the RPM in various countries has revealed the systems have some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) existing in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of the new variant of the algorithm ASIA-New (New Advanced Source Identification Algorithm), which was developed by the authors and based on experimental test results. It also demonstrates the results of different tests and the capability of the new system to overcome the shortcomings stated above. New electronics and ASIA-New enable RPM to detect radioactive sources under high background suppression (tested at 15-30%) and to verify the detected NORM (KCl) and artificial isotopes (Co-57, Ba-133 and others). The new variant of ASIA is based on physical principles and does not require many special tests to attain statistical data for its parameters. That is why this system can be easily installed into any RPM with plastic detectors. This algorithm was tested on 1,395 passages of different transports (cars, trucks and trailers) without radioactive sources. It was also tested on 4,015 passages of these transports with radioactive sources of different activity (Co-57, Ba-133, Cs-137, Co-60, Ra-226, Th-232) and these sources masked by NORM (K-40) as well.

  15. Magnet sorting algorithms for insertion devices for the Advanced Light Source

    International Nuclear Information System (INIS)

    Humphries, D.; Hoyer, E.; Kincaid, B.; Marks, S.; Schlueter, R.

    1994-01-01

    Insertion devices for the Advanced Light Source (ALS) incorporate up to 3,000 magnet blocks each for pole energization. In order to minimize field errors, these magnets must be measured, sorted and assigned appropriate locations and orientation in the magnetic structures. Sorting must address multiple objectives, including pole excitation and minimization of integrated multipole fields from minor field components in the magnets. This is equivalent to a combinatorial minimization problem with a large configuration space. Multi-stage sorting algorithms use ordering and pairing schemes in conjunction with other combinatorial methods to solve the minimization problem. This paper discusses objective functions, solution algorithms and results of application to magnet block measurement data
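
    As a toy illustration of the combinatorial minimization involved (not the ALS objective functions), the sketch below greedily accepts random pairwise swaps of measured blocks whenever they lower a made-up field-error objective:

      import random

      random.seed(0)
      n_slots = 64
      # made-up stand-in for measured block data: main strength plus a minor component
      blocks = [{"strength": random.gauss(1.0, 0.01),
                 "minor": random.gauss(0.0, 0.005)} for _ in range(n_slots)]

      def objective(order):
          # penalize slot-to-slot strength variation plus the running sum of
          # minor-component errors (a crude proxy for integrated multipoles)
          pole_err = sum((blocks[a]["strength"] - blocks[b]["strength"]) ** 2
                         for a, b in zip(order, order[1:]))
          integ = run = 0.0
          for i in order:
              run += blocks[i]["minor"]
              integ += run ** 2
          return pole_err + integ

      order = list(range(n_slots))
      best = objective(order)
      for _ in range(20000):
          i, j = random.sample(range(n_slots), 2)
          order[i], order[j] = order[j], order[i]
          cand = objective(order)
          if cand < best:
              best = cand
          else:
              order[i], order[j] = order[j], order[i]   # revert the swap
      print("final objective:", best)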

  16. R -Dimensional ESPRIT-Type Algorithms for Strictly Second-Order Non-Circular Sources and Their Performance Analysis

    Science.gov (United States)

    Steinwandt, Jens; Roemer, Florian; Haardt, Martin; Galdo, Giovanni Del

    2014-09-01

    High-resolution parameter estimation algorithms designed to exploit the prior knowledge about incident signals from strictly second-order (SO) non-circular (NC) sources allow for a lower estimation error and can resolve twice as many sources. In this paper, we derive the R-D NC Standard ESPRIT and the R-D NC Unitary ESPRIT algorithms that provide a significantly better performance compared to their original versions for arbitrary source signals. They are applicable to shift-invariant R-D antenna arrays and do not require a centrosymmetric array structure. Moreover, we present a first-order asymptotic performance analysis of the proposed algorithms, which is based on the error in the signal subspace estimate arising from the noise perturbation. The derived expressions for the resulting parameter estimation error are explicit in the noise realizations and asymptotic in the effective signal-to-noise ratio (SNR), i.e., the results become exact for either high SNRs or a large sample size. We also provide mean squared error (MSE) expressions, where only the assumptions of a zero mean and finite SO moments of the noise are required, but no assumptions about its statistics are necessary. As a main result, we analytically prove that the asymptotic performance of both R-D NC ESPRIT-type algorithms is identical in the high effective SNR regime. Finally, a case study shows that no improvement from strictly non-circular sources can be achieved in the special case of a single source.
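
    For orientation, here is a minimal 1-D standard (LS-)ESPRIT sketch for arbitrary source signals on a uniform linear array; it shows the shift invariance that the R-D NC variants generalize. The array size, angles and noise level are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      M, N, d = 8, 200, 0.5                    # sensors, snapshots, spacing (wavelengths)
      true_deg = np.array([-10.0, 25.0])
      A = np.exp(2j * np.pi * d * np.outer(np.arange(M),
                                           np.sin(np.radians(true_deg))))
      S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
      X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

      R = X @ X.conj().T / N                   # sample covariance
      _, V = np.linalg.eigh(R)
      Es = V[:, -2:]                           # signal subspace (2 sources assumed known)
      # shift invariance between the two maximally overlapping subarrays
      Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
      phases = np.angle(np.linalg.eigvals(Psi))
      est_deg = np.degrees(np.arcsin(phases / (2 * np.pi * d)))
      print(np.sort(est_deg))                  # close to the true angles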

  17. Inaccuracy of Wolff-Parkinson-white accessory pathway localization algorithms in children and patients with congenital heart defects.

    Science.gov (United States)

    Bar-Cohen, Yaniv; Khairy, Paul; Morwood, James; Alexander, Mark E; Cecchin, Frank; Berul, Charles I

    2006-07-01

    ECG algorithms used to localize accessory pathways (AP) in patients with Wolff-Parkinson-White (WPW) syndrome have been validated in adults, but less is known of their use in children, especially in patients with congenital heart disease (CHD). We hypothesize that these algorithms have low diagnostic accuracy in children and even lower in those with CHD. Pre-excited ECGs in 43 patients with WPW and CHD (median age 5.4 years [0.9-32 years]) were evaluated and compared to 43 consecutive WPW control patients without CHD (median age 14.5 years [1.8-18 years]). Two blinded observers predicted AP location using 2 adult and 1 pediatric WPW algorithms, and a third blinded observer served as a tiebreaker. Predicted locations were compared with ablation-verified AP location to identify (a) exact match for AP location and (b) match for laterality (left-sided vs right-sided AP). In control children, adult algorithms were accurate in only 56% and 60%, while the pediatric algorithm was correct in 77%. In 19 patients with Ebstein's anomaly, diagnostic accuracy was similar to controls with at times an even better ability to predict laterality. In non-Ebstein's CHD, however, the algorithms were markedly worse (29% for the adult algorithms and 42% for the pediatric algorithms). A relatively large degree of interobserver variability was seen (kappa values from 0.30 to 0.58). Adult localization algorithms have poor diagnostic accuracy in young patients with and without CHD. Both adult and pediatric algorithms are particularly misleading in non-Ebstein's CHD patients and should be interpreted with caution.

  18. Study of localized photon source in space of measures

    International Nuclear Information System (INIS)

    Lisi, M.

    2010-01-01

    In this paper we study a three-dimensional photon transport problem in an interstellar cloud, with a localized photon source inside. The problem is solved indirectly, by defining the adjoint of an operator acting on an appropriate space of continuous functions. By means of the theory of sun-adjoint semigroups of operators on a Banach space of regular Borel measures, we prove existence and uniqueness of the solution of the problem. A possible approach to identifying the location of the photon source is finally proposed.

  19. Automatic boiling water reactor control rod pattern design using particle swarm optimization algorithm and local search

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2013-02-15

    Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► The local search procedure was added to improve the result of the PSO algorithm. ► The results show that the obtained CRP is as good as that in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more random than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in previous work. In addition, a local search procedure was used to make improvements after PSO, by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin would satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, would be acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.

  20. 3D source localization of interictal spikes in epilepsy patients with MRI lesions

    Science.gov (United States)

    Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin

    2006-08-01

    The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. The higher R2 values achieved by FINE than MUSIC

  2. Multiobjective optimization of the synchrotron radiation source 'Siberia-2' lattice using a genetic algorithm

    International Nuclear Information System (INIS)

    Korchuganov, V.N.; Smygacheva, A.S.; Fomin, E.A.

    2018-01-01

    One of the best ways to design, research and optimize accelerators and synchrotron radiation sources is numerical simulation. Nevertheless, when simulating complex physical processes that involve many nonlinear effects, the use of classical optimization methods is often difficult. The article deals with the application of multiobjective optimization using genetic algorithms to accelerator and light source design. These algorithms allow both simple linear and complex nonlinear lattices to be efficiently optimized when obtaining the required facility parameters.

  3. Local structure information by EXAFS analysis using two algorithms for Fourier transform calculation

    International Nuclear Information System (INIS)

    Aldea, N; Pintea, S; Rednic, V; Matei, F; Hu Tiandou; Xie Yaning

    2009-01-01

    The present work is a comparison study between different Fourier transform algorithms for obtaining very accurate local structure results using the Extended X-ray Absorption Fine Structure technique. In this paper we focus on the local structural characteristics of supported nickel catalysts and Fe3O4 core-shell nanocomposites. The radial distribution function could be efficiently calculated by the fast Fourier transform when the coordination shells are well separated, while the Filon quadrature gave remarkable results for closely spaced coordination shells.

  4. Study of Hybrid Localization Noncooperative Scheme in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Irfan Dwiguna Sumitra

    2017-01-01

    Full Text Available In this paper, we experimentally evaluated and analyzed the measurement accuracy of determining object location based on a wireless sensor network (WSN). The algorithm estimates the position of sensor nodes employing received signal strength (RSS) from nodes scattered in the environment, in particular inside buildings. Besides that, we considered another algorithm based on weighted centroid localization (WCL). In a particular testbed, we combined both RSS and WCL as hybrid localization in a noncooperative scheme, considering that source nodes communicate directly only with anchor nodes. Our experimental results show a localization accuracy of more than 90% and an estimation error reduced to 4% compared to existing algorithms.
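
    A minimal sketch of weighted centroid localization under assumed RSS readings follows; the weighting exponent g is a tuning choice, not the paper's calibrated value.

      import numpy as np

      def wcl(anchor_pos, rss_dbm, g=2.0):
          # anchors with stronger RSS pull the estimate harder
          pos = np.asarray(anchor_pos, dtype=float)
          lin = 10 ** (np.asarray(rss_dbm) / 10.0)   # dBm -> linear power
          w = lin ** (g / 2.0)
          return (w[:, None] * pos).sum(axis=0) / w.sum()

      anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
      rss = [-50.0, -62.0, -60.0, -71.0]             # strongest near anchor (0, 0)
      print(wcl(anchors, rss))                       # estimate biased toward (0, 0)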

  5. A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot

    Directory of Open Access Journals (Sweden)

    Lingbo Cheng

    2014-12-01

    Full Text Available This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to cope with the target affine transformations caused by microgravity, with the strong reflection and refraction of sunlight and lamplight in the cabin, and with occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough fully affine-invariant local feature points, and stable matching points are obtained from them for target recognition by the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, the effective and appropriate 3D grasping scope of the target is defined, and we determine and evaluate the grasping precision with the estimated affine transformation parameters presented in this paper. Finally, the threshold of RANSAC is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle for which the robot obtains effective image data are evaluated by Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station was established. Extensive experiments were carried out, and the experimental results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
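
    The recognize-then-verify pattern can be sketched with standard OpenCV building blocks; note this uses plain SIFT rather than the paper's Affine-SIFT, and the image file names are placeholders.

      import cv2
      import numpy as np

      template = cv2.imread("tool_template.png", cv2.IMREAD_GRAYSCALE)
      scene = cv2.imread("cabin_scene.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(template, None)
      kp2, des2 = sift.detectAndCompute(scene, None)

      # ratio test keeps only distinctive correspondences
      matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      # RANSAC rejects outlier matches while estimating the transform; the
      # 5.0-pixel reprojection threshold plays the role of the tuned threshold
      H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print("inliers:", int(inlier_mask.sum()), "of", len(good))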

  6. Underwater Broadband Source Localization Based on Modal Filtering and Features Extraction

    Directory of Open Access Journals (Sweden)

    Dominique Fattaccioli

    2010-01-01

    Full Text Available Passive source localization is a crucial issue in underwater acoustics. In this paper, we focus on shallow water environments (0 to 400 m) and broadband Ultra-Low Frequency acoustic sources (1 to 100 Hz). In this configuration and at long range, the acoustic propagation can be described by normal mode theory. The propagating signal breaks up into a series of depth-dependent modes. These modes carry information about the source position. Analysis of mode excitation factors and mode phases allows localization in depth and distance, respectively. We propose two different approaches to achieve the localization: a multidimensional approach (using a horizontal array of hydrophones) based on the frequency-wavenumber transform (F-K method) and a monodimensional approach (using a single hydrophone) based on an adapted spectral representation (FTa method). For both approaches, we first propose complete tools for modal filtering, and then depth and distance estimators. We show that adding mode sign and source spectrum information improves the localization performance in depth considerably. The reference acoustic field needed for depth localization is simulated with the new realistic propagation model Moctesuma. The feasibility of both approaches, F-K and FTa, is validated on data simulated in shallow water for different configurations. The performance of localization, in depth and distance, is very satisfactory.

  8. Search and localization of orphan sources

    International Nuclear Information System (INIS)

    Gayral, J.-P.

    2001-01-01

    The control of all radioactive materials should be a major and permanent concern of every state. This paper outlines some of the steps which should be taken in order to detect and localize orphan sources. Two of them are of great importance to any state wishing to resolve the orphan source problem. The first one is to analyse the situation and the second is to establish a strategy before taking action. It is the responsibility of the state to work on the first step, but for the second one it can draw on the advice of IAEA specialists with experience gained from a variety of situations.

  9. A Local and Global Search Combined Particle Swarm Optimization Algorithm and Its Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Weitian Lin

    2014-01-01

    Full Text Available The particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this shortcoming, this paper studies the local and global search combined particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence qualification. The algorithm is tested on a set of 8 continuous benchmark functions, and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA significantly improves search performance, especially on the middle- and large-size benchmark functions.
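
    A minimal global-best PSO sketch on the Rastrigin benchmark (a common continuous test function) follows; the inertia and acceleration coefficients are textbook defaults, not the LGSCPSOA settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def rastrigin(x):
          return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

      dim, n_particles, iters = 10, 30, 500
      w, c1, c2 = 0.72, 1.49, 1.49                 # inertia and acceleration weights
      x = rng.uniform(-5.12, 5.12, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), rastrigin(x)
      g = pbest[np.argmin(pbest_f)].copy()         # global best position

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = x + v
          f = rastrigin(x)
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          g = pbest[np.argmin(pbest_f)].copy()

      print("best value found:", pbest_f.min())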

  10. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an on-line estimation algorithm for the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  12. Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms

    Science.gov (United States)

    Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming

    2017-05-01

    Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with individuals eliminated from the population, following selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed-convergence particle swarm optimization algorithm with several badly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.
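
    A hedged sketch of the MCMC stage alone: Metropolis sampling of a source posterior from concentration readings, with a crude isotropic 1/r dispersion model standing in for a real atmospheric model; all numbers are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
      true_src, q = np.array([12.0, 7.0]), 50.0

      def forward(src, rate):
          r = np.linalg.norm(sensors - src, axis=1) + 1e-6
          return rate / (4 * np.pi * r)            # crude isotropic plume model

      obs = forward(true_src, q) + rng.normal(0, 0.05, len(sensors))

      def log_post(theta):                         # flat priors assumed
          src, rate = theta[:2], theta[2]
          if rate <= 0:
              return -np.inf
          resid = obs - forward(src, rate)
          return -0.5 * np.sum((resid / 0.05) ** 2)

      theta = np.array([5.0, 5.0, 10.0])           # initial guess: (x, y, release rate)
      lp = log_post(theta)
      samples = []
      for _ in range(20000):
          prop = theta + rng.normal(0, [0.5, 0.5, 1.0])
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
              theta, lp = prop, lp_prop
          samples.append(theta.copy())

      samples = np.array(samples[5000:])           # discard burn-in
      print("posterior mean source:", samples[:, :2].mean(axis=0))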

  13. Semi-automated algorithm for localization of dermal/epidermal junction in reflectance confocal microscopy images of human skin

    Science.gov (United States)

    Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.

    2011-03-01

    The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7μm for dark skin and around 7-14μm for fair skin.

  14. Chemical Source Localization Fusing Concentration Information in the Presence of Chemical Background Noise.

    Science.gov (United States)

    Pomareda, Víctor; Magrans, Rudys; Jiménez-Soto, Juan M; Martínez, Dani; Tresánchez, Marcel; Burgués, Javier; Palacín, Jordi; Marco, Santiago

    2017-04-20

    We present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence and uniform wind conditions. The main contribution of this work is to extend previous proposals based on Bayesian inference with binary detections to the use of concentration information, while at the same time being robust against the presence of background chemical noise. For that, the algorithm builds a background model with robust statistics measurements to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a distance with a specific release rate. In addition, our algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real data experiments are used for evaluation purposes. For the simulations, we have supposed that the sensors are mounted on cars which do not have navigating toward the source among their main tasks. To collect the real dataset, a special arena with induced wind was built, and an autonomous vehicle equipped with several sensors, including a photo ionization detector (PID) for sensing chemical concentration, was used. Simulation results show that our algorithm provides a better estimation of the source location, even for low background levels that favor the performance of the binary version. The improvement is clear for the synthetic data, while for real data the estimation is only slightly better, probably because our exploration arena is not able to provide uniform wind conditions. Finally, an estimation of the computational cost of the algorithmic proposal is presented.
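
    The likelihood-map idea can be sketched as follows, with Gaussian background statistics, a fixed release rate and a crude 1/r dispersion model all assumed for illustration (the paper additionally estimates the release rate and uses robust statistics):

      import numpy as np

      sensors = np.array([[2.0, 2.0], [8.0, 3.0], [5.0, 8.0]])
      bg_mu, bg_sigma = 0.10, 0.02                 # assumed background statistics
      readings = np.array([0.55, 0.18, 0.12])      # concentrations at the sensors

      xs = ys = np.linspace(0.0, 10.0, 101)
      loglik = np.empty((len(ys), len(xs)))
      for iy, y in enumerate(ys):
          for ix, x in enumerate(xs):
              d = np.linalg.norm(sensors - [x, y], axis=1) + 1e-3
              pred = bg_mu + 1.0 / (4.0 * np.pi * d)   # source on top of background
              loglik[iy, ix] = -0.5 * np.sum(((readings - pred) / bg_sigma) ** 2)

      bg_only = -0.5 * np.sum(((readings - bg_mu) / bg_sigma) ** 2)
      iy, ix = np.unravel_index(np.argmax(loglik), loglik.shape)
      print("most likely source cell:", (xs[ix], ys[iy]))
      print("source vs background-only log-likelihood:", loglik[iy, ix], bg_only)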

  15. A localized navigation algorithm for radiation evasion for nuclear facilities: Optimizing the “Radiation Evasion” criterion: Part I

    International Nuclear Information System (INIS)

    Khasawneh, Mohammed A.; Al-Shboul, Zeina Aman M.; Jaradat, Mohammad A.

    2013-01-01

    Highlights: ► A new navigation algorithm for radiation evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemas. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this paper, we introduce a navigation algorithm having general utility for occupational workers at nuclear facilities and places where radiation poses serious health hazards. This novel algorithm leverages the use of localized information for its operation. Therefore, the need for central processing and decision resources is avoided, since information processing and the ensuing decision-making are done aboard a man-borne device. To acquire the information needed for path planning in radiation avoidance, a well-designed and distributed wireless sensory infrastructure is needed. This will automatically benefit from the most recent trends in technology developments in both sensor networks and wireless communication. When used to navigate based on local radiation information, the algorithm will behave more reliably when accidents happen, since no long-haul communication links are required for information exchange. In essence, the proposed algorithm is designed to leverage nearest neighbor information coming in through the sensory network overhead, to compute successful navigational paths from one point to another. The proposed algorithm is tested under the "Radiation Evasion" criterion. It is also tested for the case when more information, beyond nearest neighbors, is made available; here, we test its operation for different numbers of steps of look-ahead. We verify algorithm performance by means of simulations, whereby navigational paths are calculated for different radiation fields.
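
    As a simple stand-in for the navigation idea (global Dijkstra here, whereas the paper's algorithm is deliberately localized to nearest-neighbor information), the sketch below finds a grid path minimizing an accumulated radiation-dose proxy over an invented field:

      import heapq

      field = [                                    # invented radiation levels per cell
          [1, 1, 2, 9, 1],
          [1, 5, 2, 9, 1],
          [1, 9, 1, 1, 1],
          [1, 9, 1, 7, 1],
          [1, 1, 1, 7, 1],
      ]

      def least_dose_path(field, start, goal):
          rows, cols = len(field), len(field[0])
          dist, prev = {start: 0.0}, {}
          heap = [(0.0, start)]
          while heap:
              d, cell = heapq.heappop(heap)
              if cell == goal:
                  break
              if d > dist[cell]:
                  continue
              r, c = cell
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + field[nr][nc]       # dose accrued entering the cell
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          prev[(nr, nc)] = cell
                          heapq.heappush(heap, (nd, (nr, nc)))
          path, cur = [goal], goal
          while cur != start:
              cur = prev[cur]
              path.append(cur)
          return path[::-1], dist[goal]

      print(least_dose_path(field, (0, 0), (4, 4)))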

  17. Experimental validation of a distributed algorithm for dynamic spectrum access in local area networks

    DEFF Research Database (Denmark)

    Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2013-01-01

    Next generation wireless networks aim at a significant improvement of the spectral efficiency in order to meet the dramatic increase in data service demand. In local area scenarios user-deployed base stations are expected to take place, thus making the centralized planning of frequency resources...... activities with the Autonomous Component Carrier Selection (ACCS) algorithm, a distributed solution for interference management among small neighboring cells. A preliminary evaluation of the algorithm performance is provided considering its live execution on a software defined radio network testbed...

  18. Energy Efficient Routing Algorithms in Dynamic Optical Core Networks with Dual Energy Sources

    DEFF Research Database (Denmark)

    Wang, Jiayuan; Fagertun, Anna Manolova; Ruepp, Sarah Renée

    2013-01-01

    This paper proposes new energy-efficient routing algorithms for optical core networks, with the application of solar energy sources and bundled links. A comprehensive solar energy model is described for the proposed network scenarios. Network performance in terms of energy savings, connection blocking probability, resource utilization and bundled link usage is evaluated with dynamic network simulations. Results show that the proposed algorithms, which aim at reducing the dynamic part of the network's energy consumption, may meanwhile raise the fixed part of the energy consumption.

  19. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
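
    The rejection-free selection step that this family of algorithms builds on can be sketched generically: pick an event with probability proportional to its rate and advance time by an exponential variate. The rate table below is a made-up stand-in, not the Ising-model class rates.

      import math
      import random

      random.seed(0)

      def kmc_step(rates, t):
          total = sum(rates)
          x, acc, chosen = random.random() * total, 0.0, 0
          for i, r in enumerate(rates):            # pick event i with probability r/total
              acc += r
              if x < acc:
                  chosen = i
                  break
          t += -math.log(random.random()) / total  # exponential waiting time
          return chosen, t

      t = 0.0
      rates = [0.01, 0.2, 1.5, 0.05]               # per-event transition rates (made up)
      for _ in range(5):
          event, t = kmc_step(rates, t)
          print("event %d fired, t = %.3f" % (event, t))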

  20. MEG source localization using invariance of noise space.

    Directory of Open Access Journals (Sweden)

    Junpeng Zhang

    Full Text Available We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in the signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.

  1. Astrometric and Timing Effects of Gravitational Waves from Localized Sources

    OpenAIRE

    Kopeikin, Sergei M.; Schafer, Gerhard; Gwinn, Carl R.; Eubanks, T. Marshall

    1998-01-01

    A consistent approach for an exhaustive solution of the problem of propagation of light rays in the field of gravitational waves emitted by a localized source of gravitational radiation is developed in the first post-Minkowskian and quadrupole approximation of General Relativity. We demonstrate that the equations of light propagation in the retarded gravitational field of an arbitrary localized source emitting quadrupolar gravitational waves can be integrated exactly. The influence of the gra...

  2. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    Science.gov (United States)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (Electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially if a seizure occurs, in which case it is important to identify it. The studies conducted in order to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field allow patterns of brain activity to be known. The inverse problem, in which the underlying activity must be determined given the field sampled at the different electrodes, is more difficult because the problem may not have a unique solution, or the search for the solution is made difficult by a low spatial resolution which may not allow one to distinguish between activities involving sources close to each other. Thus, sources of interest may be obscured or not detected, and known methods for the source localization problem such as MUSIC (MUltiple SIgnal Classification) could fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, then as a result the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov's, which calculates a solution that is the best compromise between two cost functions to minimize, one related to the fitting of the data, and the other concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by the solution of the forward problem. Relative to the model considered for the head and brain sources, the result obtained allows to
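
    A hedged sketch of the regularized inversion described above: Tikhonov (L2) regularization has a closed form, while an L1 penalty, solved here with plain ISTA iterations, is what actually promotes sparse source maps. The lead field and data are random stand-ins, and the regularization weights are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_elec, n_src = 32, 200
      L = rng.standard_normal((n_elec, n_src))     # stand-in lead-field matrix
      x_true = np.zeros(n_src)
      x_true[[30, 121]] = [1.0, -0.8]              # two sparse active sources
      b = L @ x_true + 0.01 * rng.standard_normal(n_elec)

      # Tikhonov: x = argmin ||Lx - b||^2 + lam * ||x||^2 has a closed form
      lam = 1.0
      x_tik = np.linalg.solve(L.T @ L + lam * np.eye(n_src), L.T @ b)

      # ISTA iterations for the L1-penalized fit, which promotes sparsity
      step = 1.0 / np.linalg.norm(L, 2) ** 2
      x_l1 = np.zeros(n_src)
      for _ in range(500):
          g = x_l1 - step * L.T @ (L @ x_l1 - b)   # gradient step on the data term
          x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * 0.05, 0.0)

      print("entries above 0.1: tikhonov", int(np.sum(np.abs(x_tik) > 0.1)),
            "| l1", int(np.sum(np.abs(x_l1) > 0.1)))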

  3. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

    Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in the last years. One particular class of

  4. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case-study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  5. A Local Asynchronous Distributed Privacy Preserving Feature Selection Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper we develop a local distributed privacy preserving algorithm for feature selection in a large peer-to-peer environment. Feature selection is often used...

  6. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  7. Interpretation of the MEG-MUSIC scan in biomagnetic source localization

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.C.; Lewis, P.S. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States). Signal and Image Processing Inst.

    1993-09-01

    MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyze the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.

  8. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    Science.gov (United States)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

    An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC positioning systems the effect of reflection is neglected, and their performance is limited by reflection, which makes the results less accurate for a real scenario, with positioning errors at the corners relatively larger than at other places. We therefore take the first-order reflection into consideration and use an artificial neural network to match the model of the nonlinear channel. The studies show that under the nonlinear matching of direct and reflected channels, the average positioning error at the four corners decreases from 11.94 cm to 0.95 cm. The employed algorithm emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.

  9. Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas

    Science.gov (United States)

    Wang, Jin; Peng, Wei; Liu, Song

    2017-10-01

    Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas can only work effectively in synchronous systems, not in asynchronous systems. And collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: the asynchronous hierarchical scanning (AHS) and the asynchronous directional scanning (ADS) algorithms. Both of them are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP, achieving good performance in a hierarchical way and a directional way, respectively. Finally, the performance of AHS and ADS is tested on OMNeT++. Moreover, the different application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most of the neighboring nodes are far from the AP.

  10. A closed-form solution for moving source localization using LBI changing rate of phase difference only

    Directory of Open Access Journals (Sweden)

    Zhang Min

    2014-04-01

    Full Text Available Due to the deficiencies of conventional multiple-receiver localization systems based on direction of arrival (DOA), such as the system complexity of the interferometer or array, the amplitude/phase imbalance between multiple receiving channels, and constraints on antenna configuration, a new radiated source localization method using only the changing rate of phase difference (CRPD) measured by a long baseline interferometer (LBI) is studied. To solve this strictly nonlinear problem, a two-stage closed-form solution is proposed. In the first stage, the DOA and its changing rate are estimated from the CRPD of each observer by the pseudolinear least squares (PLS) method, and then in the second stage, the source position and velocity are found by another PLS minimization. The bias of the algorithm caused by the correlation between the measurement matrix and the noise in the second stage is analyzed. To reduce this bias, an instrumental variable (IV) method is derived. A weighted IV estimator is given in order to reduce the estimation variance. The proposed method does not need any initial guess and its computational load is small. The Cramer-Rao lower bound (CRLB) and mean square error (MSE) are also analyzed. Simulation results show that the proposed method can come close to the CRLB with moderate Gaussian measurement noise.

  11. Detecting Large-Scale Brain Networks Using EEG: Impact of Electrode Density, Head Modeling and Source Localization

    Science.gov (United States)

    Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2018-01-01

    Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969

  12. Acoustic source localization: Exploring theory and practice

    NARCIS (Netherlands)

    Wind, Jelmer

    2009-01-01

    Over the past few decades, noise pollution became an important issue in modern society. This has led to an increased effort in the industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations which are the cause of sound based on measurements of

  13. Closely spaced coherent-source localization based on the MUSIC-group delay algorithm

    Institute of Scientific and Technical Information of China (English)

    郑家芝

    2016-01-01

    In this paper, closely spaced coherent-source localization is considered, and an improved method based on the group delay of Multiple Signal Classification (MUSIC) is presented. First, the spatial smoothing technique is introduced into direction-of-arrival (DoA) estimation to remove the coherent part of the signals. Because the performance of subspace-based methods degrades when sources are closely spaced, the MUSIC-group delay algorithm is then used to distinguish the closely spaced sources: owing to its spatial additive property, it can resolve spatially close sources by means of the group delay function computed from the MUSIC phase spectrum for efficient DoA estimation. Theoretical analysis and simulation results demonstrate that the proposed approach estimates the DoA of closely spaced coherent sources more precisely and with higher resolution than subspace-based methods.
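
    As a concrete reference for the pipeline this abstract describes (spatial smoothing to decorrelate coherent signals, followed by a MUSIC-style spectrum), the following is a minimal sketch for a uniform linear array; the geometry, angles and SNR are illustrative assumptions, and the group-delay post-processing that resolves very closely spaced sources is omitted.

        import numpy as np

        c0, f0 = 343.0, 1000.0                  # sound speed [m/s], narrowband frequency [Hz]
        M, d = 10, 0.5 * c0 / f0                # sensors, half-wavelength spacing
        angles_true = np.deg2rad([5.0, 35.0])   # two coherent sources
        N = 400                                 # snapshots
        rng = np.random.default_rng(1)

        def steer(theta, m):
            return np.exp(-2j * np.pi * f0 * d * np.arange(m) * np.sin(theta) / c0)

        # Coherent pair: the second source is a scaled copy of the first waveform
        s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
        X = np.outer(steer(angles_true[0], M), s) + 0.8 * np.outer(steer(angles_true[1], M), s)
        X += 0.05 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

        # Forward spatial smoothing: average covariances of overlapping subarrays
        L = 8
        R = np.zeros((L, L), dtype=complex)
        for k in range(M - L + 1):
            Xk = X[k:k + L]
            R += Xk @ Xk.conj().T / N
        R /= (M - L + 1)

        # MUSIC pseudospectrum on the smoothed covariance, assuming 2 sources
        _, eigvecs = np.linalg.eigh(R)
        En = eigvecs[:, :-2]                    # noise subspace (smallest eigenvalues)
        grid = np.deg2rad(np.linspace(-90, 90, 1801))
        p = [1.0 / np.linalg.norm(En.conj().T @ steer(th, L)) ** 2 for th in grid]

        peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
        top2 = sorted(sorted(peaks, key=lambda i: p[i])[-2:])
        print("estimated DoAs [deg]:", [round(np.rad2deg(grid[i]), 1) for i in top2])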

  14. Impact localization in dispersive waveguides based on energy-attenuation of waves with the traveled distance

    Science.gov (United States)

    Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo

    2018-05-01

    An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances, and is caused mainly by the dispersion phenomenon, among other reasons; therefore, using conventional localization/multilateration methods produces localization errors that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, and they additionally require that the source not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method differs from the other methods and how it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data are used to motivate and justify the proposed approach for localization in a dispersive medium. Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of
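
    A minimal way to exploit the energy-distance decay for multilateration is a grid search: near the true source location, the measured log-energies fall on a straight line in distance, so the best-fitting location minimizes the residual of that line. The sketch below illustrates this under assumed sensor positions and decay constant; it is not the paper's algorithm, which additionally handles the source-sensor coincidence issue.

        import numpy as np

        rng = np.random.default_rng(2)
        sensors = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 5]], float)
        src_true, E0, alpha = np.array([1.3, 2.7]), 10.0, 0.6   # unknowns to recover

        d_true = np.linalg.norm(sensors - src_true, axis=1)
        logE = np.log(E0) - alpha * d_true + 0.02 * rng.normal(size=len(sensors))

        def misfit(p):
            """How far the log-energies are from a straight line in distance."""
            dist = np.linalg.norm(sensors - p, axis=1)
            A = np.column_stack([np.ones_like(dist), -dist])    # logE ~ logE0 - alpha*d
            coef, *_ = np.linalg.lstsq(A, logE, rcond=None)
            r = logE - A @ coef
            return float(r @ r)

        xs = ys = np.linspace(0.0, 5.0, 251)
        best = min(((misfit(np.array([x, y])), x, y) for x in xs for y in ys))
        print("estimated source:", best[1:], "true:", src_true)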

  15. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order-in-time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order-in-time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
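
    The second-order-in-time dissipative flow can be pictured as minimizing a regularized functional J(u) by integrating u'' + eta*u' = -grad J(u) with a damped, symplectic-style (leapfrog) discretization, which behaves like gradient descent with momentum. The following toy sketch applies the idea to a small ill-conditioned linear inverse problem; the operator, damping and regularization schedule are illustrative assumptions, not the paper's scheme.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50
        K = np.triu(np.ones((n, n))) / n            # smoothing, ill-conditioned operator
        u_true = np.sin(np.linspace(0.0, np.pi, n))
        g = K @ u_true + 1e-3 * rng.normal(size=n)  # noisy data

        def grad(u, lam):
            """Gradient of 0.5*||K u - g||^2 + 0.5*lam*||u||^2."""
            return K.T @ (K @ u - g) + lam * u

        u, v = np.zeros(n), np.zeros(n)
        eta, dt = 1.0, 0.5                          # damping and step size
        for k in range(2000):
            lam = 1e-2 / (1 + k)                    # dynamically shrinking regularization
            v = (1 - 0.5 * eta * dt) * v - 0.5 * dt * grad(u, lam)    # half kick
            u = u + dt * v                                            # drift
            v = (v - 0.5 * dt * grad(u, lam)) / (1 + 0.5 * eta * dt)  # half kick

        print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))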

  16. A Fast Map Merging Algorithm in the Field of Multirobot SLAM

    Directory of Open Access Journals (Sweden)

    Yanli Liu

    2013-01-01

    Full Text Available In recent years, research on single-robot simultaneous localization and mapping (SLAM) has been very successful. However, multirobot SLAM faces many challenging problems, including unknown robot poses, unshared maps, and unstable communication. In this paper, a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM. The thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The simulated data are used as an information source in the other map to perform partial-map Monte Carlo localization; if localization succeeds, the relative pose hypotheses between the two maps can be computed easily. We verify these hypotheses using the rendezvous technique and use them as initial values to optimize the estimation by a heuristic random search algorithm.

  17. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Full Text Available Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  18. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
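
    To make the cost-aggregation idea behind SGM concrete, the sketch below computes a per-pixel matching cost volume for a rectified stereo pair and aggregates it along a single path direction with the usual P1/P2 smoothness penalties; a full SGM implementation repeats the aggregation over several path directions and sums the results. The image sizes, penalties and absolute-difference cost are illustrative assumptions.

        import numpy as np

        def cost_volume(left, right, max_disp):
            """Absolute-difference matching cost C[y, x, d] for a rectified pair."""
            H, W = left.shape
            C = np.full((H, W, max_disp), 255.0)
            for d in range(max_disp):
                C[:, d:, d] = np.abs(left[:, d:] - right[:, :W - d])
            return C

        def aggregate_left_to_right(C, P1=10.0, P2=120.0):
            """SGM-style aggregation along one path direction (left -> right)."""
            H, W, D = C.shape
            L = np.zeros_like(C)
            L[:, 0] = C[:, 0]
            for x in range(1, W):
                prev = L[:, x - 1]                                  # (H, D)
                prev_min = prev.min(axis=1, keepdims=True)
                same = prev
                plus = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :D] + P1
                minus = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:] + P1
                far = np.broadcast_to(prev_min + P2, prev.shape)
                L[:, x] = C[:, x] + np.minimum.reduce([same, plus, minus, far]) - prev_min
            return L

        # Tiny synthetic example: a shifted random texture with known disparity 3
        rng = np.random.default_rng(4)
        right = rng.uniform(0, 255, size=(20, 60))
        left = np.roll(right, 3, axis=1)
        C = cost_volume(left, right, max_disp=8)
        disp = aggregate_left_to_right(C).argmin(axis=2)
        print("median disparity:", np.median(disp[:, 8:]))         # ~ 3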

  19. Localizing gravitational wave sources with single-baseline atom interferometers

    Science.gov (United States)

    Graham, Peter W.; Jung, Sunghoon

    2018-02-01

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. We show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect, since these are the highest frequencies at which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits around the Earth and the Sun, reorienting and changing position significantly during the lifetime of the source, which makes it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.

  20. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    Science.gov (United States)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and serves as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. In summary: a model is proposed to identify the characteristics of immiscible NAPL contaminant sources; the contaminant is immiscible in water, and multi-phase flow is simulated; the model is a multi-level saturation-based optimization algorithm built on the ICA; each answer string in the second level is divided into a set of provinces; and the ICA is modified by incorporating the new knock-the-base method.

  1. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association for simultaneous localization and mapping (SLAM) in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator, so an urgent task is to develop a control system that allows autonomous flight. The SLAM algorithm (simultaneous localization and mapping), which predicts the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem using an improved ant algorithm. Data association for SLAM establishes a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem in SLAM. The traditional ant algorithm, however, easily falls into local optima while searching for routes; adding random perturbations when updating the global pheromone helps avoid local optima, and setting limits on the pheromone along a route can increase the search space at a reasonable computational cost. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks that can be associated according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  2. Autonomous Micro-Air-Vehicle Control Based on Visual Sensing for Odor Source Localization

    Directory of Open Access Journals (Sweden)

    Kenzo Kurotsuchi

    2017-07-01

    Full Text Available In this paper, we propose a novel control method for autonomous odor source localization using visual and odor sensing by micro air vehicles (MAVs). Our method is based on biomimetics, which enables highly autonomous localization. The method needs no instruction signals, not even global positioning system (GPS) signals. An experimenter simply blows a whistle, and the MAV then starts to hover, seeks the odor source, and keeps hovering near the source. The GPS-signal-free control based on visual sensing enables indoor and underground use. Moreover, the MAV is lightweight (85 grams) and does not cause harm to others even if it accidentally falls. Experiments conducted in the real world successfully achieved odor source localization using the MAV with a bio-inspired searching method. The distance error of the localization was 63 cm, more accurate than the target distance of 120 cm for individual identification. These localization experiments are a first step toward a proof of concept for a danger warning system that would enable a safer and more secure society.

  3. Hearing aid controlled by binaural source localizer

    NARCIS (Netherlands)

    2009-01-01

    An adaptive directional hearing aid system comprising a left hearing aid and a right hearing aid, wherein a binaural acoustic source localizer is located in the left hearing aid or in the right hearing aid or in a separate body-worn device connected wirelessly to the left hearing aid and the right

  4. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    Science.gov (United States)

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at the scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
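
    In discrete form, the Laplacian transformation this record refers to is just the second spatial difference of the potential map: the current source density is proportional to the negative Laplacian of the field potential. A minimal finite-difference sketch, assuming unit grid spacing and conductivity:

        import numpy as np

        def csd_from_potential(phi, sigma=1.0, h=1.0):
            """CSD ~ -sigma * Laplacian(phi), 5-point stencil on the interior."""
            lap = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
                   - 4.0 * phi[1:-1, 1:-1]) / h**2
            return -sigma * lap

        # Toy potential: a smooth bump; the CSD is positive at the crest (a source)
        x = np.linspace(-2, 2, 41)
        X, Y = np.meshgrid(x, x)
        phi = np.exp(-(X**2 + Y**2))
        csd = csd_from_potential(phi)
        print("CSD at center (source > 0):", csd[19, 19])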

  5. A New Improved Quantum Evolution Algorithm with Local Search Procedure for Capacitated Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Ligang Cui

    2013-01-01

    Full Text Available The capacitated vehicle routing problem (CVRP) is the most classical vehicle routing problem (VRP), and many solution techniques have been proposed to solve it better. In this paper, a new improved quantum evolution algorithm (IQEA) with a mixed local search procedure is proposed for solving CVRPs. First, an IQEA with a double-chain quantum chromosome, new quantum rotation schemes, and a self-adaptive quantum NOT gate is constructed to initialize and generate feasible solutions. Then, to further strengthen IQEA's searching ability, three local search procedures, 1-1 exchange, 1-0 exchange, and 2-OPT, are adopted. Experiments on a small case have been conducted to analyze the sensitivity of the main parameters and to compare the performance of the IQEA with different local search strategies. Together with results from the testing of CVRP benchmarks, the superiority of the proposed algorithm over PSO, SR-1, and SR-2 has been demonstrated. Finally, a profound analysis of the experimental results is presented and some suggestions for future research are given.
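
    The 2-OPT move mentioned above removes two edges from a route and reconnects it with the intermediate segment reversed, keeping the change only when the route gets shorter. A minimal sketch for a single route with an assumed Euclidean distance matrix (the capacity checks of a full CVRP search are omitted):

        import numpy as np

        def route_length(route, D):
            return sum(D[route[i], route[i + 1]] for i in range(len(route) - 1))

        def two_opt(route, D):
            """Repeatedly reverse segments while this shortens the route."""
            route = list(route)
            improved = True
            while improved:
                improved = False
                for i in range(1, len(route) - 2):
                    for j in range(i + 1, len(route) - 1):
                        # Gain of replacing edges (i-1,i) and (j,j+1) by (i-1,j) and (i,j+1)
                        delta = (D[route[i - 1], route[j]] + D[route[i], route[j + 1]]
                                 - D[route[i - 1], route[i]] - D[route[j], route[j + 1]])
                        if delta < -1e-12:
                            route[i:j + 1] = route[i:j + 1][::-1]
                            improved = True
            return route

        rng = np.random.default_rng(5)
        pts = rng.uniform(size=(12, 2))
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        route = [0] + list(range(1, 12)) + [0]          # depot 0 -> customers -> depot
        better = two_opt(route, D)
        print(route_length(route, D), "->", route_length(better, D))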

  6. Real breakthrough in detection of radioactive sources by portal monitors with plastic detectors and New Advanced Source Identification Algorithm (ASIA-New)

    Energy Technology Data Exchange (ETDEWEB)

    Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)

    2015-07-01

    Radiation Portal Monitors (RPMs) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPMs are widely used because they are simple, reliable, relatively inexpensive and highly sensitive. However, experience using RPMs in various countries has revealed some grave shortcomings: the probability of detecting radioactive sources drops dramatically under strong suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains), and NORM (Naturally Occurring Radioactive Material) present in the objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper briefly describes a new variant of the ASIA-New algorithm (New Advanced Source Identification Algorithm) developed by the Rapiscan company, and demonstrates, through results of different tests, the capability of the new system to overcome the shortcomings stated above. The new electronics and ASIA-New enable an RPM to detect radioactive sources under high background suppression (tested at 15-30%) and to verify detected NORM (KCl) as well as artificial isotopes (Co-57, Ba-133 and others). The new variant of ASIA is based on physical principles, a phenomenological approach, and the analysis of changes in several important parameters during a vehicle's passage through the monitor control area. Thanks to this, a main advantage of the new system is that it can be easily installed in any RPM with plastic detectors. Given that more than 4000 RPMs have been installed worldwide, upgrading them with ASIA-New may significantly increase the probability of detecting and verifying radioactive sources, even those masked by NORM. This algorithm was tested for 1,395 passages of

  7. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► The vehicle’s speed and power closely match the ECE-47 driving cycle under normal and heavy load conditions. ► The changeover between energy sources occurred at the 50% battery state-of-charge level under heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for light electric vehicles (LEVs), i.e., scooters. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), the EMS and power controller, the DC machine and the vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.
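
    A rule-based supervisory controller of the kind described can be sketched as a simple dispatch function: the battery serves the load under normal conditions, while the fuel cell (with the super-capacitor covering peaks) takes over when the load is heavy or the battery state of charge falls below the changeover level. Only the 50% changeover threshold is taken from the abstract; the power limits and dispatch rules below are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class EmsState:
            soc: float          # battery state of charge, 0..1
            load_kw: float      # instantaneous load power demand

        def dispatch(s: EmsState, heavy_kw: float = 1.5) -> dict:
            """Split the load among battery, fuel cell and super-capacitor."""
            share = {"battery": 0.0, "fuel_cell": 0.0, "super_cap": 0.0}
            heavy = s.load_kw > heavy_kw
            if s.soc > 0.5 and not heavy:
                share["battery"] = s.load_kw                     # normal operation
            elif s.soc > 0.5:
                share["battery"] = heavy_kw                      # heavy load: FC assists
                share["fuel_cell"] = s.load_kw - heavy_kw
            else:
                share["fuel_cell"] = min(s.load_kw, heavy_kw)    # changeover at 50% SOC
                share["super_cap"] = max(0.0, s.load_kw - heavy_kw)
            return share

        for soc, load in [(0.8, 1.0), (0.8, 2.0), (0.4, 2.0)]:
            print(soc, load, dispatch(EmsState(soc, load)))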

  8. Optimal Allocation of Generalized Power Sources in Distribution Network Based on Multi-Objective Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Li Ran

    2017-01-01

    Full Text Available The optimal allocation of generalized power sources in a distribution network is researched, and a simple index of voltage stability is put forward. Considering the investment and operation benefit, the voltage stability, and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established, and a multi-objective particle swarm optimization algorithm is proposed to solve it. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm are tested on the IEEE 33-node system to find the best configuration of generalized power sources. The computed results show that with reasonable access of generalized power to the active distribution network, both the investment benefit and the voltage stability of the system are improved, and the proposed algorithm has good global search capability.

  9. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Directory of Open Access Journals (Sweden)

    Alexander V Maltsev

    Full Text Available Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed for live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig, and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR is detected when its signal changes sufficiently quickly within a sufficiently large area, and it is considered to have died when its amplitude decays substantially or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves, sparks and

  10. Glowworm swarm optimization theory, algorithms, and applications

    CERN Document Server

    Kaipa, Krishnanand N

    2017-01-01

    This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...

  11. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy and temporal and/or directional features of the incoming sound at different microphones, and by using a suitable model that relates those features to the spatial location of the source (or sources) of interest. This paper reviews common approaches to source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
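
    Of the feature families listed, TDOA is perhaps the simplest to turn into a working localizer: each microphone pair constrains the source to a hyperbola, and a grid search over candidate positions can minimize the mismatch between predicted and measured delay differences. The sketch below assumes known microphone positions, a 2-D free-field setting and noise-free TDOAs for illustration.

        import numpy as np

        c = 343.0                                   # speed of sound [m/s]
        mics = np.array([[0, 0], [6, 0], [6, 6], [0, 6]], float)
        src_true = np.array([2.2, 4.1])

        def tdoas(p):
            """Time differences of arrival relative to microphone 0."""
            t = np.linalg.norm(mics - p, axis=1) / c
            return t[1:] - t[0]

        measured = tdoas(src_true)                  # would come from e.g. GCC-PHAT in practice

        xs = ys = np.linspace(0, 6, 301)
        best = min(((np.sum((tdoas(np.array([x, y])) - measured) ** 2), x, y)
                    for x in xs for y in ys))
        print("estimated source:", best[1:], "true:", src_true)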

  12. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  13. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2016-01-01

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  14. Local multiplicative Schwarz algorithms for convection-diffusion equations

    Science.gov (United States)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as of the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
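
    The flavor of an overlapping Schwarz iteration is easy to convey on a 1-D Poisson model problem: split the interval into two overlapping subdomains and sweep over them, solving a local problem on each with the freshest residual (the multiplicative variant; the additive variant applies all local solves to the same residual, usually with damping or inside a Krylov method). The grid, overlap width and right-hand side below are illustrative assumptions.

        import numpy as np

        n = 99                                       # interior grid points on (0, 1)
        h = 1.0 / (n + 1)
        f = np.ones(n)                               # -u'' = 1 with u(0) = u(1) = 0

        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2

        i1 = np.arange(0, 60)                        # two overlapping subdomains
        i2 = np.arange(40, n)                        # overlap: indices 40..59

        u = np.zeros(n)
        u_exact = np.linalg.solve(A, f)
        for it in range(25):
            for idx in (i1, i2):                     # multiplicative: use freshest residual
                r = f - A @ u
                u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            if np.linalg.norm(u - u_exact, np.inf) < 1e-10:
                break
        print("sweeps:", it + 1, "max error:", np.linalg.norm(u - u_exact, np.inf))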

  15. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  16. Efficient image enhancement using sparse source separation in the Retinex theory

    Science.gov (United States)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.

  17. Hazardous Source Estimation Using an Artificial Neural Network, Particle Swarm Optimization and a Simulated Annealing Algorithm

    NARCIS (Netherlands)

    Wang, Rongxiao; Chen, B.; Qiu, S.; Ma, Liang; Zhu, Zhengqiu; Wang, Yiping; Qiu, Xiaogang

    2018-01-01

    Locating and quantifying the emission source plays a significant role in the emergency management of hazardous gas leak accidents. Due to the lack of a desirable atmospheric dispersion model, current source estimation algorithms cannot meet the requirements of both accuracy and efficiency. In

  18. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array.

    Science.gov (United States)

    Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen

    2017-07-01

    Circular microphone arrays (CMAs) are sufficient for many immersive audio applications because the azimuthal angles of sources are considered more important than the elevation angles in those settings. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications that involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively, and the direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. An objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.

  19. A localized navigation algorithm for Radiation Evasion for nuclear facilities. Part II: Optimizing the “Nearest Exit” Criterion

    Energy Technology Data Exchange (ETDEWEB)

    Khasawneh, Mohammed A., E-mail: mkha@ieee.org [Department of Electrical Engineering, Jordan University of Science and Technology (Jordan); Al-Shboul, Zeina Aman M., E-mail: xeinaaman@gmail.com [Department of Electrical Engineering, Jordan University of Science and Technology (Jordan); Jaradat, Mohammad A., E-mail: majaradat@just.edu.jo [Department of Mechanical Engineering, Jordan University of Science and Technology (Jordan); Malkawi, Mohammad I., E-mail: mmalkawi@aimws.com [College of Engineering, Jadara University, Irbid 221 10 (Jordan)

    2013-06-15

    Highlights: ► A new navigation algorithm for Radiation Evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant the least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemes. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this extension of part I (Khasawneh et al., in press), we modify the navigation algorithm that was presented with the objective of optimizing the “Radiation Evasion” criterion, so that navigation optimizes the criterion of “Nearest Exit”. Under this modification, the algorithm yields navigation paths that guide occupational workers towards the nearest exit points. Again, under this optimization criterion, the algorithm leverages localized information acquired through a well-designed and distributed wireless sensor network, as it averts the need for any long-haul communication links or a centralized decision and monitoring facility, thereby achieving more reliable performance under dynamic environments. As in part I, the proposed algorithm under the “Nearest Exit” criterion is designed to leverage nearest-neighbor information coming in through the sensory network overhead in computing successful navigational paths from one point to another. For comparison purposes, the proposed algorithm is tested under the two optimization criteria, “Radiation Evasion” and “Nearest Exit”, for different numbers of steps of look-ahead. We verify the performance of the algorithm by means of simulations, whereby navigational paths are calculated for different radiation fields. We also verify, via simulations, the performance of the algorithm in comparison with a well-known global navigation algorithm, upon which we draw our conclusions.

  20. A localized navigation algorithm for Radiation Evasion for nuclear facilities. Part II: Optimizing the “Nearest Exit” Criterion

    International Nuclear Information System (INIS)

    Khasawneh, Mohammed A.; Al-Shboul, Zeina Aman M.; Jaradat, Mohammad A.; Malkawi, Mohammad I.

    2013-01-01

    Highlights: ► A new navigation algorithm for Radiation Evasion around nuclear facilities. ► An optimization criterion minimized under algorithm operation. ► A man-borne device guiding the occupational worker towards paths that warrant the least radiation × time products. ► Benefits of using localized navigation as opposed to global navigation schemes. ► A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. -- Abstract: In this extension of part I (Khasawneh et al., in press), we modify the navigation algorithm that was presented with the objective of optimizing the “Radiation Evasion” criterion, so that navigation optimizes the criterion of “Nearest Exit”. Under this modification, the algorithm yields navigation paths that guide occupational workers towards the nearest exit points. Again, under this optimization criterion, the algorithm leverages localized information acquired through a well-designed and distributed wireless sensor network, as it averts the need for any long-haul communication links or a centralized decision and monitoring facility, thereby achieving more reliable performance under dynamic environments. As in part I, the proposed algorithm under the “Nearest Exit” criterion is designed to leverage nearest-neighbor information coming in through the sensory network overhead in computing successful navigational paths from one point to another. For comparison purposes, the proposed algorithm is tested under the two optimization criteria, “Radiation Evasion” and “Nearest Exit”, for different numbers of steps of look-ahead. We verify the performance of the algorithm by means of simulations, whereby navigational paths are calculated for different radiation fields. We also verify, via simulations, the performance of the algorithm in comparison with a well-known global navigation algorithm, upon which we draw our conclusions.
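
    A navigation scheme of this kind, which minimizes an accumulated radiation × time product over local steps, can be prototyped as a shortest-path search on a grid whose edge weights are local dose rates. The sketch below uses Dijkstra's algorithm on an assumed radiation field; it is a centralized stand-in, not the paper's distributed, sensor-driven algorithm.

        import heapq
        import numpy as np

        rng = np.random.default_rng(6)
        H, W = 20, 20
        dose = rng.uniform(0.1, 1.0, size=(H, W))
        dose[8:12, 5:15] += 5.0                       # a hot zone to walk around

        def least_dose_path(start, goal):
            """Dijkstra over grid cells; edge cost = dose rate at the entered cell."""
            dist = {start: 0.0}
            prev = {}
            pq = [(0.0, start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[(r, c)]:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < H and 0 <= nc < W:
                        nd = d + dose[nr, nc]
                        if nd < dist.get((nr, nc), np.inf):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(pq, (nd, (nr, nc)))
            path, node = [], goal
            while node != start:
                path.append(node)
                node = prev[node]
            return path[::-1], dist[goal]

        path, total = least_dose_path((0, 0), (19, 19))
        print("accumulated dose:", round(total, 2), "path length:", len(path))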

  1. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

    Full Text Available Small aperture microphone arrays provide many advantages for portable devices and hearing aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small aperture arrays. The effects of the array aperture on localization are analyzed by using the array response (array manifold). Besides the array aperture, the frequency of the acoustic source and the variance of the signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error into the proposed method. The proposed method is validated for a 5 mm array aperture by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with a precision as high as 6 degrees, even in the presence of wind noise and other noises. Furthermore, the proposed method reduces the computational complexity compared with other methods.

  2. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted by hitting different materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. The wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click), on the other hand, is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.

  3. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    Science.gov (United States)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal

  4. CMASA: an accurate algorithm for detecting local protein structural similarity and its application to enzyme catalytic site annotation

    Directory of Open Access Journals (Sweden)

    Li Gong-Hua

    2010-08-01

    Full Text Available Abstract Background The rapid development of structural genomics has resulted in many "unknown function" proteins being deposited in the Protein Data Bank (PDB); thus, the functional prediction of these proteins has become a challenge for structural bioinformatics. Several sequence-based and structure-based methods have been developed to predict protein function, but these methods need to be improved further, for example, by enhancing accuracy, sensitivity, and computational speed. Here, an accurate algorithm, CMASA (Contact MAtrix based local Structural Alignment algorithm), has been developed to predict unknown functions of proteins based on local protein structural similarity. This algorithm has been evaluated by building a test set including 164 enzyme families, and has also been compared to other methods. Results The evaluation of CMASA shows that CMASA is highly accurate (0.96), sensitive (0.86), and fast enough to be used in large-scale functional annotation. Compared to both sequence-based and global structure-based methods, CMASA not only can find remote homologous proteins, but also can find active site convergence. Compared to other local structure comparison-based methods, CMASA can obtain better performance than both FFF (a method using geometry to predict protein function) and SPASM (a local structure alignment method); it is more sensitive than PINTS and more accurate than JESS (both local structure alignment methods). CMASA was applied to annotate the enzyme catalytic sites of the non-redundant PDB, and at least 166 putative catalytic sites have been suggested; these sites are not covered by the Catalytic Site Atlas (CSA). Conclusions CMASA is an accurate algorithm for detecting local protein structural similarity, and it holds several advantages in predicting enzyme active sites. CMASA can be used in large-scale enzyme active site annotation. The CMASA can be available by the

  5. Blind Source Separation Algorithms Using Hyperbolic and Givens Rotations for High-Order QAM Constellations

    KAUST Repository

    Shah, Syed Awais Wahab

    2017-11-24

    This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the algorithms designed using Givens rotations give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-and-noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.

  6. Blind Source Separation Algorithms Using Hyperbolic and Givens Rotations for High-Order QAM Constellations

    KAUST Repository

    Shah, Syed Awais Wahab; Abed-Meraim, Karim; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the algorithms designed using Givens rotations give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-and-noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
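
    The rotation-based strategy these records describe can be illustrated end to end in a toy setting: pre-whiten the mixtures with an eigendecomposition of the sample covariance, then search over Givens rotation angles for the one minimizing a multimodulus-style contrast. The real BPSK-like sources, the 2x2 mixing and the angle scan below are illustrative simplifications of the paper's Givens/hyperbolic iterations.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 5000

        # Two independent BPSK-like (+/-1) sources, real 2x2 instantaneous mixing
        S = np.sign(rng.normal(size=(2, N)))
        A = rng.normal(size=(2, 2))
        X = A @ S

        # Pre-whitening: R = E diag(d) E^T, W = diag(d^-1/2) E^T gives cov(Z) ~ I,
        # leaving only a rotation/reflection ambiguity for the separator
        R = X @ X.T / N
        d, E = np.linalg.eigh(R)
        W = np.diag(d ** -0.5) @ E.T
        Z = W @ X

        def givens(theta):
            return np.array([[np.cos(theta), np.sin(theta)],
                             [-np.sin(theta), np.cos(theta)]])

        # Multimodulus-style contrast sum((y^2 - 1)^2): vanishes for +/-1 outputs
        def cost(theta):
            Y = givens(theta) @ Z
            return np.sum((Y ** 2 - 1.0) ** 2)

        thetas = np.linspace(0.0, np.pi / 2, 361)
        theta_best = min(thetas, key=cost)
        G = givens(theta_best) @ W @ A        # global mixing-unmixing matrix
        print(np.round(G, 2))                 # ~ a signed permutation when separated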

  7. Location of an electric source facility and local area promotion

    International Nuclear Information System (INIS)

    Shimohirao, Isao

    1999-01-01

    Energy demand and supply, energy policy, and local area promotion policy are described here as the basic problems important to the location of electric power source facilities. At present, cooperative business between the electricity industry and the areas hosting power source facilities is lacking in activity. It seems necessary to enforce systems intended to promote such cooperation earnestly, and to work toward industrial promotion, for example through the introduction of national projects and the use of reduced electricity costs as a means of attracting business. It is also necessary to pursue these measures in cooperation with electricity businesses, governments, universities and communities, both for industrial promotion and for retaining young people in local areas. To realize these necessities, still greater efforts are expected from national and local governments. (G.K.)

  8. A Novel Method Based on Oblique Projection Technology for Mixed Sources Estimation

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2014-01-01

    Full Text Available Reducing the computational complexity of localization algorithms for near-field and far-field sources has been considered a serious problem in the field of array signal processing. A novel algorithm for mixed-source location estimation based on oblique projection is proposed in this paper. The sources are estimated at two different stages, and the sensor noise power is estimated and eliminated from the covariance matrix, which improves the accuracy of the estimation of the mixed sources. Using the idea of compression, the range information of near-field sources is obtained by searching a partial area instead of the whole Fresnel area, which reduces the processing time. Compared with traditional algorithms, the proposed algorithm has lower computational complexity and is able to resolve two closely spaced sources with high resolution and accuracy. Duplication of the range estimation is also avoided. Finally, simulation results are provided to demonstrate the performance of the proposed method.
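
    The central object here is the oblique projector onto the subspace spanned by one set of steering vectors along (i.e., nulling) another, commonly written E_AB = A (A^H P_B_perp A)^{-1} A^H P_B_perp with P_B_perp = I - B (B^H B)^{-1} B^H. The numpy sketch below builds it for random full-rank blocks and checks its defining properties; the matrices are illustrative stand-ins for actual steering vectors.

        import numpy as np

        rng = np.random.default_rng(8)
        n, p, q = 8, 2, 3
        A = rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))
        B = rng.normal(size=(n, q)) + 1j * rng.normal(size=(n, q))

        def oblique_projector(A, B):
            """E_AB: identity on range(A), zero on range(B)."""
            Pb_perp = np.eye(len(B)) - B @ np.linalg.solve(B.conj().T @ B, B.conj().T)
            M = A.conj().T @ Pb_perp
            return A @ np.linalg.solve(M @ A, M)

        E = oblique_projector(A, B)
        print(np.allclose(E @ A, A), np.allclose(E @ B, np.zeros_like(B)))  # True True
        print(np.allclose(E @ E, E))                                        # idempotent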

  9. Matching pursuit and source deflation for sparse EEG/MEG dipole moment estimation.

    Science.gov (United States)

    Wu, Shun Chi; Swindlehurst, A Lee

    2013-08-01

    In this paper, we propose novel matching pursuit (MP)-based algorithms for EEG/MEG dipole source localization and parameter estimation for multiple measurement vectors with constant sparsity. The algorithms combine the ideas of MP for sparse signal recovery and source deflation, as employed in estimation via alternating projections. The source-deflated matching pursuit (SDMP) approach mitigates the problem of residual interference inherent in sequential MP-based methods or recursively applied (RAP)-MUSIC. Furthermore, unlike prior methods based on alternating projection, SDMP allows one to efficiently estimate the dipole orientation in addition to its location. Simulations show that the proposed algorithms outperform existing techniques under various conditions, including those with highly correlated sources. Results using real EEG data from auditory experiments are also presented to illustrate the performance of these algorithms.

  10. Hybridizing Evolutionary Algorithms with Opportunistic Local Search

    DEFF Research Database (Denmark)

    Gießen, Christian

    2013-01-01

    There is empirical evidence that memetic algorithms (MAs) can outperform plain evolutionary algorithms (EAs). Recently the first runtime analyses have been presented proving the aforementioned conjecture rigorously by investigating Variable-Depth Search, VDS for short (Sudholt, 2008). Sudholt...

  11. NSGA-II Algorithm with a Local Search Strategy for Multiobjective Optimal Design of Dry-Type Air-Core Reactor

    Directory of Open Access Journals (Sweden)

    Chengfen Zhang

    2015-01-01

    Full Text Available Dry-type air-core reactors are now widely applied in electrical power distribution systems, for which optimization design is a crucial issue. In the optimization design of a dry-type air-core reactor, the objectives of minimizing the production cost and minimizing the operation cost are both important. In this paper, a multiobjective optimization model is established that considers these two objectives simultaneously. To solve the multiobjective optimization problem, a memetic evolutionary algorithm is proposed, which combines the elitist nondominated sorting genetic algorithm version II (NSGA-II) with a local search strategy based on the covariance matrix adaptation evolution strategy (CMA-ES). NSGA-II provides the decision maker with flexible choices among the different trade-off solutions, while the local search strategy, applied in a given generation to a given number of nondominated individuals randomly selected from the current population, accelerates the convergence speed. A further modification is that an external archive is maintained in the proposed algorithm to increase evolutionary efficiency. The proposed algorithm is tested on a dry-type air-core reactor made of rectangular cross-section litz wire. Simulation results show that the proposed algorithm is highly efficient and converges to a better Pareto front.
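
    For context, the sketch below extracts the nondominated (Pareto) front of a two-objective minimization, the basic building block of NSGA-II's sorting. The full NSGA-II machinery (crowding distance, CMA-ES local search, external archive) is not reproduced, and the random objective values are an assumption.

    ```python
    import numpy as np

    def nondominated_front(F):
        """Return indices of nondominated points for a minimization problem.
        F is an (n_points, n_objectives) array."""
        n = F.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if not keep[i]:
                continue
            # i is dominated if some point is no worse everywhere, better somewhere
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if np.any(dominated):
                keep[i] = False
        return np.where(keep)[0]

    # Toy demo: two conflicting objectives (e.g., production vs. operation cost).
    rng = np.random.default_rng(3)
    F = rng.random((200, 2))
    front = nondominated_front(F)
    print(len(front), "nondominated designs out of", len(F))
    ```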

  12. Facilitating Follow-up of LIGO–Virgo Events Using Rapid Sky Localization

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Yu [Department of Astronomy and Astrophysics and Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Holz, Daniel E. [Enrico Fermi Institute, Department of Physics, Department of Astronomy and Astrophysics, and Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States)

    2017-05-10

    We discuss an algorithm for accurate and very low-latency (<1 s) localization of gravitational-wave (GW) sources using only the relative times of arrival, relative phases, and relative signal-to-noise ratios for pairs of detectors. The algorithm is independent of distances and masses to leading order, and can be generalized to all discrete (as opposed to stochastic and continuous) sources detected by ground-based detector networks. Our approach is similar to that of BAYESTAR with a few modifications, which result in increased computational efficiency. For the LIGO two-detector configuration (Hanford+Livingston) operating in O1 we find a median 50% (90%) localization of 143 deg² (558 deg²) for binary neutron stars. We use our algorithm to explore the improvement in localization resulting from loud events, finding that the loudest out of the first 4 (or 10) events reduces the median sky-localization area by a factor of 1.9 (3.0) for the case of two GW detectors, and 2.2 (4.0) for three detectors. We also consider the case of multi-messenger joint detections in both the gravitational and the electromagnetic radiation, and show that joint localization can offer significant improvements (e.g., in the case of LIGO and Fermi/GBM joint detections). We show that a prior on the binary inclination, potentially arising from GRB observations, has a negligible effect on GW localization. Our algorithm is simple, fast, and accurate, and may be of particular utility in the development of multi-messenger astronomy.

  13. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    Science.gov (United States)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Recent advances in intelligence technology have boosted the development of micro-Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle, without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been addressed by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization to unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which agreement between the pair of eyes indicates "signal" and disagreement indicates jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can be carried out independently for each single pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallelly structured independent component analysis (ICA) algorithm was implemented on both Field Programmable Gate Array (FPGA) and Application

  14. General-purpose parallel algorithm based on CUDA for source pencils' deployment of large γ irradiator

    International Nuclear Information System (INIS)

    Yang Lei; Gong Xueyu; Wang Ling

    2013-01-01

    Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source pencils' deployment was obtained by using a parallel plant growth simulation algorithm, fully parallelized with the CUDA execution model so that the corresponding code can run on a GPU. On this basis, several instances at various scales were used to test the new version of the algorithm. The results show that, building on the advantages of the old versions, the performance of the new one is improved by more than a factor of 500 compared with the CPU version, and by a factor of 30 compared with the hybrid CPU-plus-GPU version. The computation time of the new version is less than ten minutes for irradiators whose activity is less than 111 PBq. For a single GTX275 GPU, the new version can handle irradiators of up to 167 PBq with a computation time of no more than 25 minutes, and with multiple GPUs the capacity can be increased further. Overall, the new version of the algorithm running on a GPU satisfies the requirements of source pencils' deployment for any domestic irradiator, and it is highly competitive. (authors)

  15. On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2015-01-01

    combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...

  16. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    Science.gov (United States)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base-metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
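
    A minimal sketch of this kind of inversion can be written with SciPy's differential_evolution using the best/1/bin strategy named above. The forward model g(x) = A·z0/((x - x0)² + z0²)^q and all parameter values below are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def gravity_anomaly(x, A, x0, z0, q):
        """Simple residual gravity forward model for an idealized buried
        source: g(x) = A * z0 / ((x - x0)^2 + z0^2)^q (illustrative form)."""
        return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

    x = np.linspace(-100.0, 100.0, 101)
    true = dict(A=5.0e4, x0=10.0, z0=25.0, q=1.5)    # q = 1.5 ~ sphere-like source
    g_obs = gravity_anomaly(x, **true) \
        + np.random.default_rng(4).normal(0, 0.05, x.size)

    def misfit(p):
        A, x0, z0, q = p
        return np.sum((g_obs - gravity_anomaly(x, A, x0, z0, q)) ** 2)

    bounds = [(1e3, 1e6), (-50, 50), (1, 100), (0.5, 2.5)]
    res = differential_evolution(misfit, bounds, strategy='best1bin', seed=4)
    print(res.x)    # estimated (A, x0, z0, q)
    ```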

  17. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics, and many problems in engineering and science can be formulated as optimization problems. Such problems may have many local optima; an optimization problem with many local optima, known as a multimodal optimization problem, poses the question of how to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method can overcome the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
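
    The sketch below illustrates only the two-stage structure of such a hybrid: a crude random global phase (a stand-in for the full ABC algorithm, whose bee-colony operators are not reproduced) followed by a BFGS polish via scipy.optimize.minimize. The Rastrigin test function and search box are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rastrigin(x):
        """Classic multimodal test function with many local optima."""
        return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    def hybrid_optimize(f, dim, n_scouts=200, seed=5):
        """Global exploration (a stand-in for the ABC phase) followed by a
        BFGS polish started from the best point found."""
        rng = np.random.default_rng(seed)
        scouts = rng.uniform(-5.12, 5.12, (n_scouts, dim))
        best = scouts[np.argmin([f(s) for s in scouts])]
        return minimize(f, best, method='BFGS')

    res = hybrid_optimize(rastrigin, dim=2)
    print(res.x, res.fun)    # a polished (local) optimum near the best scout
    ```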

  18. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    Science.gov (United States)

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method

  19. Acoustic sources of opportunity in the marine environment - Applied to source localization and ocean sensing

    Science.gov (United States)

    Verlinden, Christopher M.

    Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.

  20. Multi-scale spatial modeling of human exposure from local sources to global intake

    DEFF Research Database (Denmark)

    Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier

    2018-01-01

    Exposure studies, used in human health risk and impact assessments of chemicals are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point source emissions compare to local impacts. To address this problem, we introduce Pangea......, an innovative multi-scale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties...... occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might only account for fractions of global cumulative intakes. We also study ~10,000 emission locations covering France more densely to determine per chemical and exposure route...

  1. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    Full Text Available We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers, employed as the sensing element, is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The source localization method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.

  2. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers, employed as the sensing element, is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The source localization method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.

  3. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms are needed that can reduce the radiation dose and acquisition time without degrading image quality. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total-variation-based optimization in a compressed sensing framework, in order to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations on a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is the shorter window during which motion artifacts can arise if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
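
    For illustration, the sketch below implements randomized Kaczmarz in isolation, with Strohmer-Vershynin row sampling. The toy dense system stands in for a (vastly larger, sparse) CT projection operator; the Douglas–Rachford splitting and total-variation machinery of the paper are not reproduced.

    ```python
    import numpy as np

    def randomized_kaczmarz(A, b, n_iter=2000, seed=6):
        """Randomized Kaczmarz: project the iterate onto one randomly chosen
        hyperplane a_i^T x = b_i per step, with rows sampled with probability
        proportional to ||a_i||^2 (Strohmer-Vershynin sampling)."""
        rng = np.random.default_rng(seed)
        row_norms2 = np.sum(A ** 2, axis=1)
        probs = row_norms2 / row_norms2.sum()
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            i = rng.choice(A.shape[0], p=probs)
            x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
        return x

    # Toy consistent system standing in for a projection matrix.
    rng = np.random.default_rng(6)
    A = rng.standard_normal((300, 50))
    x_true = rng.standard_normal(50)
    x_hat = randomized_kaczmarz(A, A @ x_true)
    print(np.linalg.norm(x_hat - x_true))    # small residual error
    ```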

  4. Subjective Response to Foot-Fall Noise, Including Localization of the Source Position

    DEFF Research Database (Denmark)

    Brunskog, Jonas; Hwang, Ha Dong; Jeong, Cheol-Ho

    2011-01-01

    annoyance, using simulated binaural room impulse responses, with sources being a moving point source or a nonmoving surface source, and rooms being a room with a reverberation time of 0.5 s or an anechoic room. The paper concludes that no strong effect of the source localization on the annoyance can...

  5. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar, isotropic radioactive source, obtained by subjecting the standard Gaussian probability density function (PDF) to a translation method that displaces its maximum across its domain, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random-number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because the model can be adjusted to simulations involving natural terrestrial radiation, or accidents in medical establishments or industries where radioactive material spreads over a plane. Some attempts to obtain an FRN for the PDF of the problem had already been implemented by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the MC rejection-sampling technique. This article follows the methodology of that previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, two computational models of exposure (MCEs) were used: MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the EGSnrc MC code and to the GND planar source based on the rejection technique) and MSTA NT. The two MCEs are similar in all respects except the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by GND
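
    As a hedged illustration of the rejection-sampling technique mentioned above, the sketch below draws samples from an illustrative right-asymmetric density built from a shifted Gaussian. The density and its bound are assumptions, not the group's actual planar-source PDF.

    ```python
    import numpy as np

    def rejection_sample(pdf, n, lo, hi, pdf_max, seed=7):
        """Generic Monte Carlo rejection sampling from a bounded PDF on
        [lo, hi]: draw (x, u) uniformly and keep x whenever u < pdf(x)."""
        rng = np.random.default_rng(seed)
        out = []
        while len(out) < n:
            x = rng.uniform(lo, hi, n)
            u = rng.uniform(0.0, pdf_max, n)
            out.extend(x[u < pdf(x)])
        return np.array(out[:n])

    # Illustrative right-asymmetric density built from a shifted Gaussian.
    def skewed_pdf(x, mu=1.0, s=0.8):
        g = np.exp(-0.5 * ((x - mu) / s) ** 2)
        return g * (1.0 + 0.5 * np.tanh(x - mu))    # mild right asymmetry

    samples = rejection_sample(skewed_pdf, 10000, lo=-3, hi=6, pdf_max=1.6)
    print(samples.mean(), samples.std())
    ```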

  6. Aerosol retrieval algorithm for the characterization of local aerosol using MODIS L1B data

    International Nuclear Information System (INIS)

    Wahab, A M; Sarker, M L R

    2014-01-01

    Atmospheric aerosol plays an important role in the radiation budget, climate change, hydrology and visibility. It also has an immense effect on air quality, especially in densely populated areas, where high aerosol concentrations are associated with premature death and decreased life expectancy. Therefore, an accurate estimation of aerosol with its spatial distribution is essential, and satellite data have increasingly been used to estimate aerosol optical depth (AOD). The aerosol product (AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) data is available at the global scale, but problems arise due to its low spatial resolution and time-lag availability, as well as the use of generalized aerosol models in the retrieval algorithm instead of local aerosol models. This study focuses on an aerosol retrieval algorithm for the characterization of local aerosol in Hong Kong over a long period of time (2006-2011), using high-spatial-resolution MODIS level 1B data (500 m resolution) and taking into account local aerosol models. Two methods (dark dense vegetation and the MODIS land surface reflectance product) were used for the estimation of the surface reflectance over land, and the Santa Barbara DISORT Radiative Transfer (SBDART) code was used to construct LUTs for calculating the aerosol reflectance as a function of AOD. Results indicate that AOD can be estimated at the local scale from high-resolution MODIS data, and the obtained accuracy (ca. 87%) is very much comparable with the accuracies obtained in other studies (80%-95%) for AOD estimation

  7. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    International Nuclear Information System (INIS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-01-01

    The growing use of composite materials in aircraft structures has attracted much attention to impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and the easy arrangement of its sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, which makes it difficult to obtain the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further accentuates it, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy. (paper)
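
    For background, the sketch below computes a standard narrowband MUSIC pseudospectrum for a uniform linear array. The ULA geometry, narrowband point sources, and grid-based peak picking are assumptions; the single-frequency-component re-estimation of the paper is not reproduced.

    ```python
    import numpy as np

    def music_spectrum(R, n_sources, d=0.5):
        """Narrowband MUSIC pseudospectrum for an m-element uniform linear
        array with element spacing d (in wavelengths)."""
        m = R.shape[0]
        _, eigvec = np.linalg.eigh(R)                  # eigenvalues ascending
        En = eigvec[:, :m - n_sources]                 # noise subspace
        angles = np.linspace(-90.0, 90.0, 361)
        k = np.arange(m)
        p = []
        for a in np.deg2rad(angles):
            sv = np.exp(-2j * np.pi * d * k * np.sin(a))
            p.append(1.0 / np.real(sv.conj() @ (En @ En.conj().T) @ sv))
        return angles, np.array(p)

    # Toy demo: two narrowband sources at -20 and 35 degrees, 8-element ULA.
    rng = np.random.default_rng(8)
    m, n = 8, 2000
    doas = np.deg2rad([-20.0, 35.0])
    A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
    S = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
    X = A @ S + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
    ang, p = music_spectrum(X @ X.conj().T / n, n_sources=2)
    peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
    print(sorted(ang[i] for i in sorted(peaks, key=lambda i: p[i])[-2:]))  # ~[-20, 35]
    ```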

  8. Source localization with an advanced gravitational wave detector network

    International Nuclear Information System (INIS)

    Fairhurst, Stephen

    2011-01-01

    We derive an expression for the accuracy with which sources can be localized using a network of gravitational wave detectors. The result is obtained via triangulation, using timing accuracies at each detector and is applicable to a network with any number of detectors. We use this result to investigate the ability of advanced gravitational wave detector networks to accurately localize signals from compact binary coalescences. We demonstrate that additional detectors can significantly improve localization results and illustrate our findings with networks comprised of the advanced LIGO, advanced Virgo and LCGT. In addition, we evaluate the benefits of relocating one of the advanced LIGO detectors to Australia.

  9. Improved Bevatron local injector ion source performance

    International Nuclear Information System (INIS)

    Stover, G.; Zajec, E.

    1985-05-01

    Performance tests of the improved Bevatron Local Injector PIG Ion Source using particles of Si⁴⁺, Ne³⁺, and He²⁺ are described. Initial measurements of the 8.4 keV/nucleon Si⁴⁺ beam show an intensity of 100 particle microamperes with a normalized emittance of 0.06 π cm·mrad. A low energy beam transport line provides mass analysis, diagnostics, and matching into a 200 MHz RFQ linac. The RFQ accelerates the beam from 8.4 to 200 keV/nucleon. The injector is unusual in the sense that all ion source power supplies, the ac distribution network, vacuum control equipment, and computer control system are contained in a four bay rack mounted on insulators which is located on a floor immediately above the ion source. The rack, transmission line, and the ion source housing are raised by a dc power supply to 80 kilovolts above earth ground. All power supplies, which are referenced to rack ground, are modular in construction and easily removable for maintenance. AC power is delivered to the rack via a 21 kVA, 3-phase transformer. 2 refs., 5 figs., 1 tab

  10. Effect of conductor geometry on source localization: Implications for epilepsy studies

    International Nuclear Information System (INIS)

    Schlitt, H.; Heller, L.; Best, E.; Ranken, D.; Aaron, R.

    1994-01-01

    We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy gained by replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model with those obtained when fitting with a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head

  11. Medical image registration by combining global and local information: a chain-type diffeomorphic demons algorithm

    International Nuclear Information System (INIS)

    Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong

    2013-01-01

    The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, the methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in a gradient magnitude image can be regarded as more reliable and robust for these artifacts. Then, registering medical images by considering the differences in both image intensity and gradient magnitude is a straightforward selection. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm by combining the differences in both image intensity and gradient magnitude for medical image registration. Previous work had shown that the classic demons algorithm can be considered as an approximation of a second order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces which were derived from the gradients of the image and gradient magnitude image. We show that, in controlled experiments, this advantage is confirmed, and yields a fast convergence. (paper)

  12. Source localization using a non-cocentered orthogonal loop and dipole (NCOLD) array

    Institute of Scientific and Technical Information of China (English)

    Liu Zhaoting; Xu Tongyang

    2013-01-01

    A uniform array of scalar sensors with intersensor spacings over a large aperture size generally offers enhanced resolution and source localization accuracy, but it may also lead to cyclic ambiguity. By exploiting the polarization information of impinging waves, an electromagnetic vector-sensor array outperforms the unpolarized scalar-sensor array in resolving this cyclic ambiguity. However, the electromagnetic vector-sensor array usually consists of cocentered orthogonal loops and dipoles (COLD), which is easily subjected to mutual coupling across these cocentered dipoles/loops. As a result, the source localization performance of the COLD array may substantially degrade rather than being improved. This paper proposes a new source localization method with a non-cocentered orthogonal loop and dipole (NCOLD) array. The NCOLD array contains only one dipole or loop on each array grid, and the intersensor spacings are larger than a half-wavelength. Therefore, unlike the COLD array, these well separated dipoles/loops minimize the mutual coupling effects and extend the spatial aperture as well. With the NCOLD array, the proposed method can efficiently exploit the polarization information to offer high localization precision.

  13. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) is proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this method is invalid when information about both the source strength and the location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model is transformed to a linear form under some assumptions, and the source parameters, including source strength and location, are identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters are selected by the L-curve method. The estimation results with different regularization matrices show that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimates of the source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. Comparison results for simulated and experimental cases show that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, while the Tikhonov-PSO method can additionally give a reasonable confidence interval at given probability levels. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
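
    A minimal sketch of the Tikhonov building block is shown below: zero-order (identity) regularization solved via an augmented least-squares system. The toy ill-conditioned operator stands in for the linearized dispersion model; the PSO stage and the L-curve parameter selection are not reproduced.

    ```python
    import numpy as np

    def tikhonov(A, b, lam, L=None):
        """Tikhonov solution of min ||Ax - b||^2 + lam^2 ||Lx||^2, solved via
        the augmented least-squares system [A; lam*L] x = [b; 0]."""
        n = A.shape[1]
        L = np.eye(n) if L is None else L
        A_aug = np.vstack([A, lam * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])
        return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

    # Toy demo: ill-conditioned forward model with noisy data.
    rng = np.random.default_rng(9)
    A = rng.standard_normal((40, 20)) @ np.diag(np.logspace(0, -6, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    for lam in (1e-4, 1e-2, 1.0):    # in practice chosen by the L-curve
        print(lam, np.linalg.norm(tikhonov(A, b, lam) - x_true))
    ```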

  14. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of the source water total organic carbon (TOC) concentration can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data, or unimpaired flow scenarios, makes it difficult to model TOC, and TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate variables, e.g., temperature, and land surface variables, e.g., soil moisture, as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences on their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
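
    For illustration, the sketch below fits a one-predictor local polynomial regression (kernel-weighted least squares around each evaluation point). The synthetic temperature-to-TOC relationship and the bandwidth are assumptions, not the study's data or tuning.

    ```python
    import numpy as np

    def local_poly_predict(x_train, y_train, x0, bandwidth, degree=1):
        """Local polynomial regression: fit a weighted polynomial around x0
        with Gaussian kernel weights and return the fit evaluated at x0."""
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
        X = np.vander(x_train - x0, degree + 1)   # columns: ..., (x - x0), 1
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
        return beta[-1]                            # intercept = value at x0

    # Toy demo: nonlinear response of "TOC" to a single climate predictor.
    rng = np.random.default_rng(10)
    temp = np.sort(rng.uniform(0, 30, 200))
    toc = 3 + 2 * np.sin(temp / 5) + 0.3 * rng.standard_normal(200)
    grid = np.linspace(1, 29, 50)
    fit = [local_poly_predict(temp, toc, t0, bandwidth=2.0) for t0 in grid]
    print(np.round(fit[:5], 2))
    ```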

  15. A method for detecting crack wave arrival time and crack localization in a tunnel by using moving window technique

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Chul; Park, Tae Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Source localization in a dispersive medium has been carried out based on the time-of-arrival-differences (TOADs) method: a triangulation method and a circle intersection technique. Recent signal processing advances have led to calculating the TOAD using a joint time-frequency analysis of the signal, where the short-time Fourier transform (STFT) and the wavelet transform are popular algorithms. Compared with previous methods, time-frequency analysis can provide more varied information and more reliable results, such as seismic-attenuation estimates, dispersive characteristics, wave mode analyses, and the temporal energy distribution of signals. These algorithms, however, have their own limitations for signal processing. In this paper, the effective use of the proposed algorithm in detecting crack wave arrival times and localizing sources in rock masses suggests that evaluation and real-time monitoring of the intensity of damage to tunnels or other underground facilities is possible. Calculating the variances over moving windows as a function of window size differentiates noise from the crack signal, which allows us to determine the crack wave arrival time. The source localization is then determined as the point where the variance of crack wave velocities between the real and virtual crack locations becomes a minimum. To validate our algorithm, we performed experiments at the tunnel, which resulted in successful determination of the wave arrival time and crack localization.
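
    A minimal sketch in the spirit of the moving-window idea is shown below: an arrival is picked when the variance in a trailing window exceeds a multiple of the noise-floor variance (an STA/LTA-like heuristic). The window size, threshold ratio, and synthetic trace are assumptions, not the paper's calibrated procedure.

    ```python
    import numpy as np

    def pick_arrival(signal, win, ratio=5.0):
        """Pick a wave arrival as the first sample where the variance in a
        trailing window exceeds `ratio` times the noise-floor variance
        estimated from the start of the trace."""
        noise_var = np.var(signal[:win])
        for i in range(win, len(signal)):
            if np.var(signal[i - win:i]) > ratio * noise_var:
                return i
        return None

    # Toy trace: noise followed by a burst arriving at sample 500.
    rng = np.random.default_rng(11)
    x = 0.1 * rng.standard_normal(2000)
    x[500:700] += np.sin(2 * np.pi * 0.05 * np.arange(200)) * np.hanning(200)
    print(pick_arrival(x, win=50))    # close to (slightly after) 500
    ```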

  16. Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector

    CERN Document Server

    Göttfert, Tobias

    The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.

  17. Olfactory source localization in the open field using one or both nostrils.

    Science.gov (United States)

    Welge-Lussen, A; Looser, G L; Westermann, B; Hummel, T

    2014-03-01

    This study aims to examine humans' abilities to localize odorants within the open field. Young participants were tested on a localization task using a relatively selective olfactory stimulus (2-phenylethyl-alcohol, PEA) and cineol, an odorant with a strong trigeminal component. Participants were blindfolded and had to localize an odorant source at a 2 m distance (far-field condition) and a 0.4 m distance (near-field condition) with either two nostrils open or only one open nostril. For the odorant with trigeminal properties, the number of correct trials did not differ when one or both nostrils were used, while more PEA localization trials were correctly completed with both rather than one nostril. In the near-field condition, correct localization was possible in 72-80% of the trials, irrespective of the odorant and the number of nostrils used. Localization accuracy, measured as spatial deviation from the olfactory source, was significantly higher in the near-field compared to the far-field condition, but independent of the odorant being localized. Odorant localization within the open field is difficult, but possible. In contrast to the general view, humans seem to be able to exploit the two-nostril advantage with increasing task difficulty.

  18. Bayesian spatial filters for source signal extraction: a study in the peripheral nerve.

    Science.gov (United States)

    Tang, Y; Wodlinger, B; Durand, D M

    2014-03-01

    The ability to extract physiological source signals to control various prosthetics offers tremendous therapeutic potential to improve the quality of life of patients suffering from motor disabilities. Regardless of the modality, recordings of physiological source signals are contaminated with noise and interference, along with crosstalk between the sources. These impediments make it difficult to isolate potential physiological source signals for control. In this paper, a novel Bayesian Source Filter for signal Extraction (BSFE) algorithm for extracting physiological source signals for control is presented. The BSFE algorithm is based on the source localization method Champagne and constructs spatial filters using Bayesian methods that simultaneously maximize the signal-to-noise ratio of the recovered source signal of interest while minimizing crosstalk interference between sources. When evaluated on peripheral nerve recordings obtained in vivo, the algorithm achieved the highest signal-to-noise-and-interference ratio (7.00 ± 3.45 dB) among the methodologies compared, with an average correlation between the extracted source signal and the original source signal of R = 0.93. The results support the efficacy of the BSFE algorithm for extracting source signals from the peripheral nerve.
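
    The BSFE algorithm itself builds on the Champagne Bayesian localizer and is not reproduced here. The sketch below shows only the general idea it shares at a high level: a variance-minimizing spatial filter with a unit-gain constraint on the source of interest (a classical minimum-variance/LCMV filter). The toy mixing model is an assumption.

    ```python
    import numpy as np

    def lcmv_weights(R, a):
        """Classic minimum-variance spatial filter: w = R^{-1} a / (a^H R^{-1} a).
        Passes the target source with unit gain while minimizing the output
        power contributed by noise and interfering sources."""
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    # Toy demo: 2 sources on a 6-channel recording; extract source 1 only.
    rng = np.random.default_rng(12)
    a1, a2 = rng.standard_normal((2, 6))
    s1, s2 = rng.standard_normal((2, 5000))
    X = np.outer(a1, s1) + np.outer(a2, s2) + 0.2 * rng.standard_normal((6, 5000))
    R = X @ X.T / X.shape[1]
    w = lcmv_weights(R, a1)
    y = w.conj() @ X
    print(np.corrcoef(y.real, s1)[0, 1])   # high correlation with the target source
    ```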

  19. swLORETA: a novel approach to robust source localization and synchronization tomography

    International Nuclear Information System (INIS)

    Palmero-Soler, Ernesto; Dolan, Kevin; Hadamschek, Volker; Tass, Peter A

    2007-01-01

    Standardized low-resolution brain electromagnetic tomography (sLORETA) is a widely used technique for source localization. However, this technique still has some limitations, especially under realistic noisy conditions and in the case of deep sources. To overcome these problems, we present here swLORETA, an improved version of sLORETA, obtained by incorporating a singular value decomposition-based lead field weighting. We show that the precision of the source localization can further be improved by a tomographic phase synchronization analysis based on swLORETA. The phase synchronization analysis turns out to be superior to a standard linear coherence analysis, since the latter cannot distinguish between real phase locking and signal mixing

  20. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    Science.gov (United States)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.

  1. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    Science.gov (United States)

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.

  2. Localization of the gamma-radiation sources using the gamma-visor

    Directory of Open Access Journals (Sweden)

    Ivanov Kirill E.

    2008-01-01

    Full Text Available The search of the main gamma-radiation sources at the site of the temporary storage of solid radioactive wastes was carried out. The relative absorbed dose rates were measured for some of the gamma-sources before and after the rehabilitation procedures. The effectiveness of the rehabilitation procedures in the years 2006-2007 was evaluated qualitatively and quantitatively. The decrease of radiation background at the site of the temporary storage of the solid radioactive wastes after the rehabilitation procedures allowed localizing the new gamma-source.

  3. Localization of the gamma-radiation sources using the gamma-visor

    International Nuclear Information System (INIS)

    Ivanov, K. E.; Ponomaryev-Stepnoi, N. N.; Stepennov, B. S.; Teterin, Y. A.; Teterin, A. Y.; Kharitonov, V. V.

    2008-01-01

    The search of the main gamma-radiation sources at the site of the temporary storage of solid radioactive wastes was carried out. The relative absorbed dose rates were measured for some of the gamma-sources before and after the rehabilitation procedures. The effectiveness of the rehabilitation procedures in the years 2006-2007 was evaluated qualitatively and quantitatively. The decrease of radiation background at the site of the temporary storage of the solid radioactive wastes after the rehabilitation procedures allowed localizing the new gamma-source. (author)

  4. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    Energy Technology Data Exchange (ETDEWEB)

    Soufi, M [Shahid Beheshti University, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters for the application of the random walk algorithm to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable robustness to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. Validation and verification of the algorithm were done with the 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages: at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. In addition, to investigate algorithm performance on clinical data, 19 patients with lung tumors were studied. The contours resulting from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
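
    The reported best thresholds translate directly into seed masks, as in the minimal sketch below (foreground at >= 70% SUVmax, background at <= 30% SUVmax). The synthetic image and array layout are assumptions; the random-walk solver itself is not reproduced.

    ```python
    import numpy as np

    def random_walk_seeds(pet, fg_frac=0.70, bg_frac=0.30):
        """Derive foreground/background seed masks for a random-walk
        segmentation from SUVmax fractions (the study's best values)."""
        suv_max = pet.max()
        foreground = pet >= fg_frac * suv_max
        background = pet <= bg_frac * suv_max
        return foreground, background

    # Toy "PET" slice: a bright Gaussian blob on a noisy background.
    yy, xx = np.mgrid[0:64, 0:64]
    pet = 8.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 60.0)
    pet += np.abs(np.random.default_rng(13).normal(0, 0.2, pet.shape))
    fg, bg = random_walk_seeds(pet)
    print(fg.sum(), "foreground seeds,", bg.sum(), "background seeds")
    ```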

  5. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    International Nuclear Information System (INIS)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to find the best seed localization parameters for the application of the random walk algorithm to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable robustness to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. Validation and verification of the algorithm were done with the 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages: at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. In addition, to investigate algorithm performance on clinical data, 19 patients with lung tumors were studied. The contours resulting from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and

  6. An Efficient Two-Objective Hybrid Local Search Algorithm for Solving the Fuel Consumption Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Weizhen Rao

    2016-01-01

    Full Text Available The classical model of vehicle routing problem (VRP generally minimizes either the total vehicle travelling distance or the total number of dispatched vehicles. Due to the increased importance of environmental sustainability, one variant of VRPs that minimizes the total vehicle fuel consumption has gained much attention. The resulting fuel consumption VRP (FCVRP becomes increasingly important yet difficult. We present a mixed integer programming model for the FCVRP, and fuel consumption is measured through the degree of road gradient. Complexity analysis of FCVRP is presented through analogy with the capacitated VRP. To tackle the FCVRP’s computational intractability, we propose an efficient two-objective hybrid local search algorithm (TOHLS. TOHLS is based on a hybrid local search algorithm (HLS that is also used to solve FCVRP. Based on the Golden CVRP benchmarks, 60 FCVRP instances are generated and tested. Finally, the computational results show that the proposed TOHLS significantly outperforms the HLS.
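
    As a hedged illustration of the kind of move used inside hybrid local search for routing problems, the sketch below applies first-improvement 2-opt to a single closed route. Plain Euclidean distance stands in for the paper's gradient-dependent fuel cost, and the single-route setting is an illustrative simplification of the full FCVRP.

    ```python
    import numpy as np

    def route_cost(route, dist):
        """Total cost of a closed route (depot -> customers -> depot)."""
        legs = zip([0] + list(route), list(route) + [0])
        return sum(dist[i][j] for i, j in legs)

    def two_opt(route, dist):
        """First-improvement 2-opt: reverse a segment whenever doing so
        lowers the route cost; repeat until no improving move remains."""
        improved = True
        while improved:
            improved = False
            for i in range(len(route) - 1):
                for j in range(i + 2, len(route) + 1):
                    cand = route[:i] + route[i:j][::-1] + route[j:]
                    if route_cost(cand, dist) < route_cost(route, dist):
                        route, improved = cand, True
        return route

    rng = np.random.default_rng(14)
    pts = rng.random((8, 2))                       # depot is index 0
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    route = list(range(1, 8))
    print(route_cost(route, dist), "->", route_cost(two_opt(route, dist), dist))
    ```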

  7. Designing localized electromagnetic fields in a source-free space

    International Nuclear Information System (INIS)

    Borzdov, George N.

    2002-01-01

    An approach to characterizing and designing localized electromagnetic fields, based on the use of differentiable manifolds, differentiable mappings, and the rotation group, is presented. By way of illustration, novel families of exact time-harmonic solutions to Maxwell's equations in source-free space - localized fields defined by the rotation group - are obtained. The proposed approach provides a broad spectrum of tools for designing localized fields, i.e., for building in symmetry properties of the oscillating electric and magnetic fields, governing the distributions of their energy densities (both the size and the form of the localization domains), and setting the structure of the time-averaged energy fluxes. It is shown that localized fields can be combined as constructive elements to obtain complex field structures with desirable properties, such as one-, two-, or three-dimensional field gratings. The proposed approach can be used to design localized electromagnetic fields that govern the motion and state of charged and neutral particles. As an example, the motion of relativistic electrons in one-dimensional and three-dimensional field gratings is treated

  8. Local fractional variational iteration algorithm II for non-homogeneous model associated with the non-differentiable heat flow

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2015-10-01

    Full Text Available In this article, we begin with the non-homogeneous model for the non-differentiable heat flow, which is described using the local fractional vector calculus, from the point of view of the first law of thermodynamics in fractal media. We employ the local fractional variational iteration algorithm II to solve the fractal heat equations. The obtained results show the non-differentiable behavior of the temperature fields of fractal heat flow defined on Cantor sets.

  9. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    Science.gov (United States)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested against both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiments, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
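
    A hedged sketch of the inverse approach described above: a small genetic algorithm searches for the position and emission rate of a single source that best reproduces receptor concentrations predicted by a ground-level Gaussian plume. The plume coefficients, receptor grid, and GA settings are illustrative stand-ins, not those of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
U = 3.0  # mean wind speed (m/s), blowing along +x

def plume(q, xs, ys, rx, ry):
    """Ground-level Gaussian plume concentration at receptors (rx, ry)."""
    dx, dy = rx - xs, ry - ys
    c = np.zeros_like(dx, dtype=float)
    down = dx > 1.0                              # only downwind receptors see the plume
    sy, sz = 0.08 * dx[down], 0.06 * dx[down]    # toy dispersion coefficients
    c[down] = q / (np.pi * U * sy * sz) * np.exp(-dy[down] ** 2 / (2 * sy ** 2))
    return c

# Synthetic "measurements": a true source at (200, 150) emitting 5 g/s,
# observed on a 5 x 5 receptor grid downwind of it.
gx, gy = np.meshgrid(np.linspace(250, 950, 5), np.linspace(50, 250, 5))
rx, ry = gx.ravel(), gy.ravel()
measured = plume(5.0, 200.0, 150.0, rx, ry)

def fitness(pop):
    """Squared mismatch between modeled and measured concentrations."""
    return np.array([np.sum((plume(q, xs, ys, rx, ry) - measured) ** 2)
                     for q, xs, ys in pop])

# Plain generational GA: tournament selection, blend crossover, mutation.
lo = np.array([0.1, 0.0, 0.0])        # bounds on (rate, x, y)
hi = np.array([20.0, 1000.0, 1000.0])
pop = rng.uniform(lo, hi, size=(60, 3))
for _ in range(300):
    f = fitness(pop)
    picks = rng.integers(0, len(pop), size=(60, 2))
    parents = pop[np.where(f[picks[:, 0]] < f[picks[:, 1]], picks[:, 0], picks[:, 1])]
    alpha = rng.uniform(size=(60, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
print(pop[np.argmin(fitness(pop))])   # should approach [5, 200, 150]
```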

  10. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Science.gov (United States)

    Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik

    2017-01-01

    Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
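
    The modeling pipeline described above pairs an SVM with recursive feature elimination; the sketch below reproduces that pairing with scikit-learn on random stand-in data, since the NCI-60 expression and response matrices are not included here. A linear kernel is assumed so that RFE can rank features by coefficient magnitude.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))          # 60 cell lines x 500 probe sets (stand-in)
y = X[:, :10] @ rng.normal(size=10)     # response driven by 10 hidden probes

# RFE needs a model exposing coef_; drop half the features at each step.
model = RFE(SVR(kernel="linear"), n_features_to_select=10, step=0.5)
model.fit(X, y)
print(np.flatnonzero(model.support_))   # indices of the retained probe sets
```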

  11. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Directory of Open Access Journals (Sweden)

    Cai Huang

    Full Text Available Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.

  12. Crowd-Sourced Mobility Mapping for Location Tracking Using Unlabeled Wi-Fi Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2015-01-01

    Full Text Available Due to the increasing requirements of seamless and round-the-clock location-based services (LBSs), a growing interest in Wi-Fi network aided location tracking has been witnessed over the past decade. One of the significant problems of conventional Wi-Fi location tracking approaches based on received signal strength (RSS) fingerprinting is the time-consuming and labor-intensive work involved in location fingerprint calibration. To solve this problem, a novel unlabeled Wi-Fi simultaneous localization and mapping (SLAM) approach is developed that avoids location fingerprinting and additional inertial or vision sensors. In this approach, an unlabeled mobility map of the coverage area is first constructed by crowd-sourcing a batch of sporadically recorded Wi-Fi RSS sequences, based on spectral cluster assembling. Then, a sequence alignment algorithm is applied to conduct location tracking and mobility map updating. Finally, the effectiveness of this approach is verified by extensive experiments carried out in a campus-wide area.

  13. Inversion of Atmospheric Tracer Measurements, Localization of Sources

    Science.gov (United States)

    Issartel, J.-P.; Cabrit, B.; Hourdin, F.; Idelkadi, A.

    When abnormal concentrations of a pollutant are observed in the atmosphere, the question of its origin arises immediately. The radioactivity from Chernobyl was detected in Sweden before the accident was announced. This situation emphasizes the psychological, political and medical stakes of rapid identification of sources. In technical terms, most industrial sources can be modeled as a fixed point at ground level with undetermined duration. The classical method of identification involves the calculation of a backtrajectory departing from the detector with an upstream integration of the wind field. We were first involved in such questions as we evaluated the efficiency of the international monitoring network planned in the frame of the Comprehensive Test Ban Treaty. We propose a new approach to backtracking based upon the use of retroplumes associated with available measurements. Firstly, the retroplume is related to inverse transport processes, describing quantitatively how the air in a sample originates from regions that are all the more extended and diffuse as we go back far in the past. Secondly, it clarifies the sensitivity of the measurement with respect to all potential sources. It is therefore calculated by adjoint equations, including of course diffusive processes. Thirdly, the statistical interpretation, valid as far as single particles are concerned, should not be used to investigate the position and date of a macroscopic source. In that case, the retroplume rather induces a straightforward constraint between the intensity of the source and its position. When more than one measurement is available, including zero-valued measurements, the source satisfies the same number of linear relations, each tightly related to a retroplume. This system of linear relations can be handled through the simplex algorithm in order to make the above intensity-position correlation more restrictive. This method enables managing in a quantitative manner the
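
    A hedged illustration of the system of linear relations mentioned above: if the source field is discretized into non-negative cell emissions s and each retroplume supplies a sensitivity row of a matrix A, the measurements satisfy A s = m, and a simplex-type linear program such as minimizing the total emission subject to A s = m, s ≥ 0 returns a sparse vertex solution that points at candidate source cells. The sensitivity matrix below is synthetic, not a transport-model output.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_cells, n_meas = 100, 12
A = rng.lognormal(sigma=1.0, size=(n_meas, n_cells))  # stand-in retroplume rows
s_true = np.zeros(n_cells)
s_true[37] = 4.0                                      # one point source
m = A @ s_true                                        # simulated measurements

# Simplex-style LP: minimal total emission consistent with all measurements.
res = linprog(c=np.ones(n_cells), A_eq=A, b_eq=m, bounds=(0, None))
print(np.flatnonzero(res.x > 1e-6))                   # sparse candidate source cells
```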

  14. Improved algorithm for surface display from volumetric data

    International Nuclear Information System (INIS)

    Lobregt, S.; Schaars, H.W.G.K.; OpdeBeek, J.C.A.; Zonneveld, F.W.

    1988-01-01

    A high-resolution surface display is produced from three-dimensional datasets (computed tomography or magnetic resonance imaging). Unlike other voxel-based methods, this algorithm does not show a cuberille surface structure, because the surface orientation is calculated from the original gray values. The applied surface shading is a function of the local orientation and position of the surface and of a virtual light source, giving a realistic impression of the surface of bone and soft tissue. The projection and shading are table driven, combining variable viewpoint and illumination conditions with speed. Other options are cut-plane gray-level display and surface transparency. Combined with volume scanning, this algorithm offers powerful application possibilities.

  15. Three-dimensional tomosynthetic image restoration for brachytherapy source localization

    International Nuclear Information System (INIS)

    Persons, Timothy M.

    2001-01-01

    Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third.
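
    A minimal sketch of the direct restoration route mentioned above (Wiener filtering extended to three dimensions in the frequency domain), assuming the 3D impulse response of the reconstruction is known. The Gaussian PSF and the noise-to-signal constant are illustrative.

```python
import numpy as np

def wiener_deconv_3d(volume, psf, nsr=1e-2):
    """Restore a blurred 3-D reconstruction given its 3-D PSF (Wiener filter)."""
    H = np.fft.fftn(np.fft.ifftshift(psf), s=volume.shape)   # centered PSF -> transfer fn
    G = np.fft.fftn(volume)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G              # Wiener inverse
    return np.real(np.fft.ifftn(F))

# Toy example: blur a point "seed" with a Gaussian PSF, then restore it.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
psf = np.exp(-(x**2 + y**2 + z**2) / 8.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(np.fft.ifftshift(psf))))
restored = wiener_deconv_3d(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))   # (16, 16, 16)
```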

  16. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  17. land as a source of revenue mobilisation for local authorities in ghana

    African Journals Online (AJOL)

    Prince Acheampong

    available to the local authorities to raise money from their land resources. ... In Ghana, the Local Government Act, 1993 (Act 462) lists ten main sources of ..... The betterment levy is a tax, which has been little used – perhaps because it is ...

  18. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan

    2014-04-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of the fingerprinted radio map, which is quite a hectic and time-consuming process, especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd-sourced RSS readings and localization requests as the source data set and the environment's plan coordinates as the destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm achieves less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd-sourced readings, whereas other learning-based localization schemes exceed 10 m mean error with the same information.

  19. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan; Sorour, Sameh; Al-Naffouri, Tareq Y.; Valaee, Shahrokh

    2014-01-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of the fingerprinted radio map, which is quite a hectic and time-consuming process, especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd-sourced RSS readings and localization requests as the source data set and the environment's plan coordinates as the destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm achieves less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd-sourced readings, whereas other learning-based localization schemes exceed 10 m mean error with the same information.

  20. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. During the diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was applied to the engine to isolate interference noise from the No. 1-5 cylinders; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, the binaural sound localization method is adopted to separate the noise sources that are in different places. Then, for noise sources in the same place, the blind source separation method is used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine are combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results have fewer interference components from other noise.
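
    As a hedged sketch of the two-microphone localization step described above, the code below estimates the inter-microphone time delay with the generalized cross-correlation phase transform (GCC-PHAT), a standard technique for this task; the sampling rate and the delayed-noise test signal are illustrative, and the paper's own binaural method may differ in detail.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Return the time delay (s) of sig relative to ref via GCC-PHAT."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))   # cross-spectrum
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)            # phase transform
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))   # center zero lag
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs = 48_000
ref = np.random.default_rng(0).normal(size=4096)   # broadband source at mic 1
sig = np.roll(ref, 23)                             # same signal delayed at mic 2
tau = gcc_phat(sig, ref, fs)
print(tau * fs)                                    # ~ 23 samples of delay
```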

  1. Novel applications of locally sourced montmorillonite (MMT) clay as ...

    African Journals Online (AJOL)

    This work explores the application of a locally sourced raw material, montmorillonite (MMT) clay, as a disintegrant in the formulation of an analgesic pharmaceutical product - paracetamol. The raw MMT was refined and treated with 0.1 M NaCl to yield sodium montmorillonite (NaMMT) and the powder properties established in ...

  2. Optimization of distribution piping network in district cooling system using genetic algorithm with local search

    International Nuclear Information System (INIS)

    Chan, Apple L.S.; Hanby, Vic I.; Chow, T.T.

    2007-01-01

    A district cooling system is a sustainable means of distribution of cooling energy through mass production. A cooling medium like chilled water is generated at a central refrigeration plant and supplied to serve a group of consumer buildings through a piping network. Because of the substantial capital investment involved, an optimal design of the distribution piping configuration is one of the crucial factors for successful implementation of the district cooling scheme. In the present study, a genetic algorithm (GA) incorporating local search techniques was developed to find the optimal/near-optimal configuration of the piping network in a hypothetical site. The effects of local search, mutation rate and frequency of local search on the performance of the GA, in terms of both solution quality and computation time, were investigated and are presented in this paper.

  3. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    Science.gov (United States)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  4. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    Science.gov (United States)

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot evaluate effectively the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
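
    A minimal sketch of the LWLR step described above: for each query point, training instances are weighted by a Gaussian kernel on their distance to the query and a weighted least-squares fit is solved locally. The feature construction from auxiliary domains is abstracted into the input matrix, and the kernel bandwidth is illustrative.

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression prediction at x_query."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])          # add intercept column
    xq = np.concatenate(([1.0], x_query))
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))  # kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)   # weighted least squares
    return xq @ theta

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)       # nonlinear target
print(lwlr_predict(np.array([0.7]), X, y))                 # ~ sin(1.4)
```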

  5. Chorus source region localization in the Earth's outer magnetosphere using THEMIS measurements

    Directory of Open Access Journals (Sweden)

    O. Agapitov

    2010-06-01

    Full Text Available Discrete ELF/VLF chorus emissions, the most intense electromagnetic plasma waves observed in the Earth's radiation belts and outer magnetosphere, are thought to propagate roughly along magnetic field lines from a localized source region near the magnetic equator towards the magnetic poles. THEMIS project Electric Field Instrument (EFI) and Search Coil Magnetometer (SCM) measurements were used to determine the spatial scale of the chorus source localization region on the day side of the Earth's outer magnetosphere. We present simultaneous observations of the same chorus elements registered onboard several THEMIS spacecraft in 2007, when all the spacecraft were in the same orbit. Discrete chorus elements were observed at 0.15–0.25 of the local electron gyrofrequency, which is typical for the outer magnetosphere. We evaluated the Poynting flux and wave vector distribution and found that chorus wave packets propagate quasi-parallel to the local magnetic field. Amplitude and phase correlation data analysis allowed us to estimate the characteristic spatial correlation scale transverse to the local magnetic field to be in the 2800–3200 km range.

  6. A 3D Two-node and One-node HCMFD Algorithm for Pin-wise Reactor Analysis

    International Nuclear Information System (INIS)

    Kim, Jaeha; Kim, Yonghee

    2016-01-01

    To maximize parallel computational efficiency, an iterative local-global strategy is adopted in the HCMFD algorithm. The global eigenvalue problem is solved by one-node CMFD, and the local fixed-source problems are solved by two-node CMFD based on the pin-wise nodal solutions. In such a local-global scheme, the computational cost is mostly concentrated in solving the local problems, but they can be solved in parallel, so parallel computing can be applied effectively. Previously, the feasibility of the HCMFD algorithm was evaluated only in a 2-D scheme. In this paper, the 3D HCMFD algorithm, with some possible variations in treating the axial direction, is introduced. The HCMFD algorithm was successfully extended to 3-D core analysis without any numerical instability, even though the axial mesh size in the local problems is quite different from the x-y node size. We have shown that 3D pin-wise core analysis can be done very effectively within the HCMFD framework. Additionally, it was demonstrated that the parallel efficiency of the new 3D HCMFD scheme can be quite high on a simple OpenMP parallel architecture. It is concluded that 3D HCMFD will enable efficient pin-wise 3D core analysis.

  7. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of unknowns in systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight is provided into the impact of the measurements' size and location.

  8. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.

    2015-08-31

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of unknowns in systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight is provided into the impact of the measurements' size and location.

  9. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.

    Science.gov (United States)

    Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.

  10. Simultaneous estimation of strength and position of a heat source in a participating medium using DE algorithm

    International Nuclear Information System (INIS)

    Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P.M.V.

    2013-01-01

    An inverse heat transfer problem is discussed to estimate simultaneously the unknown position and timewise varying strength of a heat source by utilizing a differential evolution approach. A two-dimensional enclosure with isothermal and black boundaries containing a non-scattering, absorbing and emitting gray medium is considered. Both radiation and conduction heat transfer are included. No prior information is used for the functional form of the timewise varying strength of the heat source. The finite volume method is used to solve the radiative transfer equation and the energy equation. In this work, instead of measured data, the temperature data required in the solution of the inverse problem are taken from the solution of the direct problem. The effect of measurement errors on the accuracy of estimation is examined by introducing errors into the temperature data of the direct problem. The prediction of the source strength and its position by the differential evolution (DE) algorithm is found to be quite reasonable. -- Highlights: •Simultaneous estimation of strength and position of a heat source. •A conducting and radiatively participating medium is considered. •Implementation of differential evolution algorithm for such kind of problems. •Profiles with discontinuities can be estimated accurately. •No limitation in the determination of source strength at the final time
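
    A hedged sketch of the estimation step described above, substituting SciPy's differential evolution for the authors' own DE implementation and a toy forward model for the conduction-radiation direct problem: the unknowns are the source position and its strengths at a few time knots, and the fitness is the squared mismatch with the simulated temperature data.

```python
import numpy as np
from scipy.optimize import differential_evolution

times = np.linspace(0.0, 1.0, 20)

def forward(pos, strengths):
    """Toy stand-in for the direct problem: sensor readings over time."""
    knots = np.linspace(0.0, 1.0, strengths.size)
    q = np.interp(times, knots, strengths)          # timewise source strength
    sensors = np.array([0.2, 0.5, 0.8])             # hypothetical sensor positions
    return q[None, :] * np.exp(-np.abs(sensors[:, None] - pos))

true = forward(0.35, np.array([1.0, 3.0, 2.0]))     # synthetic "measurements"

def cost(p):
    """Squared mismatch between candidate and measured temperatures."""
    return np.sum((forward(p[0], p[1:]) - true) ** 2)

res = differential_evolution(cost, bounds=[(0.0, 1.0)] + [(0.0, 5.0)] * 3, seed=0)
print(res.x)  # should approach [0.35, 1.0, 3.0, 2.0]
```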

  11. XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction

    Science.gov (United States)

    Avery, Patrick; Falls, Zackary; Zurek, Eva

    2018-01-01

    Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas the previous versions of XTALOPT were published under the GNU General Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license that is recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than an extension to AVOGADRO.

  12. Evaluation of smoothing in an iterative lp-norm minimization algorithm for surface-based source localization of MEG

    Science.gov (United States)

    Han, Jooman; Sic Kim, June; Chung, Chun Kee; Park, Kwang Suk

    2007-08-01

    The imaging of neural sources of magnetoencephalographic data based on distributed source models requires additional constraints on the source distribution in order to overcome ill-posedness and obtain a plausible solution. The minimum lp norm (0 < p ≤ 1) … temporal gyrus.
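
    The entry above is truncated in this listing, so the sketch below is only a generic illustration of minimum lp-norm source estimation (0 < p ≤ 1) by iterative reweighting (a FOCUSS-style scheme); the lead field and source vector are random stand-ins, and the smoothing evaluated in the paper is omitted.

```python
import numpy as np

def min_lp_norm(L, y, p=0.8, n_iter=30, lam=1e-6):
    """Iteratively reweighted solution of y = L s favoring small lp norm."""
    s = np.linalg.lstsq(L, y, rcond=None)[0]          # minimum-L2 starting point
    for _ in range(n_iter):
        w = np.abs(s) ** (1 - p / 2) + 1e-12          # reweighting diagonal
        Lw = L * w[None, :]                           # L @ diag(w)
        s = w * (Lw.T @ np.linalg.solve(Lw @ Lw.T + lam * np.eye(L.shape[0]), y))
    return s

rng = np.random.default_rng(0)
L = rng.normal(size=(32, 200))                        # sensors x candidate sources
s_true = np.zeros(200)
s_true[[50, 120]] = [2.0, -1.5]                       # two focal sources
y = L @ s_true
print(np.flatnonzero(np.abs(min_lp_norm(L, y)) > 0.1))  # ~ [50, 120]
```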

  13. EEG source localization in full-term newborns with hypoxic-ischemia

    NARCIS (Netherlands)

    Jennekens, W.; Dankers, F.; Blijham, P.; Cluitmans, P.; van Pul, C.; Andriessen, P.

    2013-01-01

    The aim of this study was to evaluate EEG source localization by standardized weighted low-resolution brain electromagnetic tomography (swLORETA) for monitoring of full-term newborns with hypoxic-ischemic encephalopathy, using a standard anatomic head model. Three representative examples of neonatal

  14. Energy spectra unfolding of fast neutron sources using the group method of data handling and decision tree algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, Seyed Abolfazl, E-mail: sahosseini@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Tehran 8639-11365 (Iran, Islamic Republic of); Afrakoti, Iman Esmaili Paeen [Faculty of Engineering & Technology, University of Mazandaran, Pasdaran Street, P.O. Box: 416, Babolsar 47415 (Iran, Islamic Republic of)

    2017-04-11

    Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using the developed computational codes based on the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed computational codes based on the GMDH and DT algorithms use some data for training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the simulated neutron pulse height distributions by MCNPX-ESUT for each energy spectrum are used as the output and input data. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The ²⁴¹Am-⁹Be and ²⁵²Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for the used fast neutron sources have an excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than those obtained from the DT. The results obtained in the present study have good accuracy in comparison with the previously published paper based on the logsig and tansig transfer functions. - Highlights: • The neutron pulse height distribution was simulated using MCNPX-ESUT. • The energy spectrum of the neutron source was unfolded using GMDH. • The energy spectrum of the neutron source was

  15. Effect of Brain-to-Skull Conductivity Ratio on EEG Source Localization Accuracy

    OpenAIRE

    Gang Wang; Doutian Ren

    2013-01-01

    The goal of this study was to investigate the influence of the brain-to-skull conductivity ratio (BSCR) on EEG source localization accuracy. In this study, we evaluated four BSCRs: 15, 20, 25, and 80, the values mainly discussed in the literature. The scalp EEG signals were generated by BSCR-related forward computation for each cortical dipole source. Then, for each scalp EEG measurement, source reconstruction was performed to identify the estimated dipole sources by the actual ...

  16. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    Science.gov (United States)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g., minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
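
    A hedged sketch of the hybrid idea described above: a global evolutionary search supplies a near-optimal point that a gradient-based local solver then polishes. The multimodal Rastrigin objective stands in for the ground-water management MINLP, whose pumping costs and constraints are not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def cost(x):
    """Toy multimodal objective (Rastrigin) with many local minima."""
    return np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4
# Stage 1: global evolutionary search, without the built-in local polish.
coarse = differential_evolution(cost, bounds, seed=0, polish=False)
# Stage 2: gradient-based refinement starting from the near optimum.
fine = minimize(cost, coarse.x, method="L-BFGS-B", bounds=bounds)
print(coarse.fun, "->", fine.fun)   # local refinement of the global estimate
```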

  17. Bi-objective branch-and-cut algorithms

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Ehrgott, Matthias; Nielsen, Lars Relund

    Most real-world optimization problems are of a multi-objective nature, involving objectives which are conflicting and incomparable. Solving a multi-objective optimization problem requires a method which can generate the set of rational compromises between the objectives. In this paper, we propose...... are strengthened by cutting planes. In addition, we suggest an extension of the branching strategy "Pareto branching". Extensive computational results obtained for the bi-objective single source capacitated facility location problem prove the effectiveness of the algorithms....... and compares it to an upper bound set. The implicit bound set based algorithm, on the other hand, fathoms branching nodes by generating a single point on the lower bound set for each local nadir point. We outline several approaches for fathoming branching nodes and we propose an updating scheme for the lower...

  18. End User Perceptual Distorted Scenes Enhancement Algorithm Using Partition-Based Local Color Values for QoE-Guaranteed IPTV

    Science.gov (United States)

    Kim, Jinsul

    In this letter, we propose a distorted-scenes enhancement algorithm in order to provide end-user perceptual QoE-guaranteed IPTV service. Block edge detection with a weight factor and a partition-based local color values method are applied to degraded video frames affected by network transmission errors such as out-of-order delivery, jitter, and packet loss, to improve QoE efficiently. Quality-metric results after applying the distorted-scenes enhancement algorithm show that the distorted scenes are restored better than with other methods.

  19. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    Science.gov (United States)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean
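
    A hedged numerical counterpart of the decomposition discussed above: the long-term mean concentration from a ground-level line source is the integral of a point-source kernel along the source. A simple Gaussian-plume point kernel with illustrative dispersion coefficients stands in for the advanced-model parameterization developed in the paper.

```python
import numpy as np
from scipy.integrate import quad

U = 4.0  # mean wind speed (m/s), blowing along +x

def point_kernel(q, dx, dy):
    """Ground-level Gaussian plume from a point source of strength q."""
    if dx <= 0:
        return 0.0                      # receptor upwind of the source point
    sy, sz = 0.10 * dx, 0.06 * dx       # toy dispersion coefficients
    return q / (np.pi * U * sy * sz) * np.exp(-dy ** 2 / (2 * sy ** 2))

def line_source(q_per_m, y0, y1, rx, ry):
    """Concentration at (rx, ry) from a line source along x = 0, y in [y0, y1]."""
    val, _ = quad(lambda ys: point_kernel(q_per_m, rx, ry - ys), y0, y1)
    return val

print(line_source(1.0, -100.0, 100.0, rx=500.0, ry=0.0))
```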

  20. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    Science.gov (United States)

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.

  1. Closed-Form Algorithm for 3-D Near-Field OFDM Signal Localization under Uniform Circular Array.

    Science.gov (United States)

    Su, Xiaolong; Liu, Zhen; Chen, Xin; Wei, Xizhang

    2018-01-14

    Due to its widespread application in communications, radar, etc., localization of the orthogonal frequency division multiplexing (OFDM) signal has become increasingly important. Under uniform circular array (UCA) and near-field conditions, this paper presents a closed-form algorithm based on phase differences for estimating the three-dimensional (3-D) location (azimuth angle, elevation angle, and range) of an OFDM signal. Considering that it is difficult to distinguish the frequencies of the OFDM signal's subcarriers and that phase-based methods are affected by errors in the frequency estimates, the algorithm employs sparse representation (SR) to obtain super-resolution frequencies and the corresponding phases of the subcarriers. Further, as the phase differences of adjacent sensors, which involve the azimuth angle, elevation angle and range parameters, can be expressed as indefinite equations, the near-field OFDM signal's 3-D location is obtained with the least squares method, where the phase differences are based on the average over the estimated subcarriers. Finally, the performance of the proposed algorithm is demonstrated by several simulations.

  2. Joint control algorithm in access network

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To deal with the long probing delay and inaccurate probing results of the endpoint admission control method, a joint local and end-to-end admission control algorithm is proposed, which introduces local probing of the access network in addition to end-to-end probing. Through local probing, the algorithm accurately estimates the resource status of the access network. Simulation shows that this algorithm can improve admission control performance and reduce users' average waiting time when the access network is heavily loaded.

  3. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  4. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    Science.gov (United States)

    Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sunlit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large fluctuation of the Earth albedo. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  5. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate its ability to predict a customer's credit status. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.

  6. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals from noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that the neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Examination of the suitability of an implementation of the Jette localized heterogeneities fluence term L(1)(x,y,z) in an electron beam treatment planning algorithm

    Science.gov (United States)

    Rodebaugh, Raymond Francis, Jr.

    2000-11-01

    In this project we applied modifications of the Fermi-Eyges multiple scattering theory to attempt to achieve the goals of a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central-axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple Coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities. Here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For the air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment, particularly apparent for the 9 MeV beam. Of course, a 1 cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This is an indication that the series of fluence redistribution terms does not converge fast enough to terminate after the second term. It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average configuration.

  8. Dual channel rank-based intensity weighting for quantitative co-localization of microscopy images

    LENUS (Irish Health Repository)

    Singan, Vasanth R

    2011-10-21

    Abstract. Background: Accurate quantitative co-localization is a key parameter in the context of understanding the spatial co-ordination of molecules and therefore their function in cells. Existing co-localization algorithms consider either the presence of co-occurring pixels or correlations of intensity in regions of interest. Depending on the image source, and the algorithm selected, the co-localization coefficients determined can be highly variable, and often inaccurate. Furthermore, the choice of whether co-occurrence or correlation is the best approach for quantifying co-localization remains controversial. Results: We have developed a novel algorithm to quantify co-localization that improves on and addresses the major shortcomings of existing co-localization measures. This algorithm uses a non-parametric ranking of pixel intensities in each channel, and the difference in ranks of co-localizing pixel positions in the two channels is used to weight the coefficient. This weighting is applied to co-occurring pixels, thereby efficiently combining both co-occurrence and correlation. Tests with synthetic data sets show that the algorithm is sensitive to both co-occurrence and correlation at varying levels of intensity. Analysis of biological data sets demonstrates that this new algorithm offers high sensitivity, and that it is capable of detecting subtle changes in co-localization, exemplified by studies on a well-characterized cargo protein that moves through the secretory pathway of cells. Conclusions: This algorithm provides a novel way to efficiently combine co-occurrence and correlation components in biological images, thereby generating an accurate measure of co-localization. This approach of rank weighting of intensities also eliminates the need for manual thresholding of the image, which is often a cause of error in co-localization quantification. We envisage that this tool will facilitate the quantitative analysis of a wide range of biological data sets.
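
    A hedged sketch of the rank-weighting idea described above: intensities in each channel are ranked non-parametrically, co-occurring pixels are identified, and their contribution is down-weighted by the difference in ranks across channels. The coefficient below captures the spirit of the method; the published algorithm may differ in detail.

```python
import numpy as np
from scipy.stats import rankdata

def rank_weighted_coloc(ch1, ch2):
    """Rank-weighted co-localization coefficient of two image channels."""
    a, b = ch1.ravel().astype(float), ch2.ravel().astype(float)
    r1 = rankdata(a) / a.size            # normalized ranks, channel 1
    r2 = rankdata(b) / b.size            # normalized ranks, channel 2
    cooccur = (a > 0) & (b > 0)          # co-occurring pixels
    weight = 1.0 - np.abs(r1 - r2)       # penalize rank mismatch
    return (weight * cooccur).sum() / max(cooccur.sum(), 1)

rng = np.random.default_rng(0)
base = rng.random((64, 64))
print(rank_weighted_coloc(base, base))                   # identical channels: 1.0
print(rank_weighted_coloc(base, rng.random((64, 64))))   # lower for unrelated noise
```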

  9. Beamspace fast fully adaptive brain source localization for limited data sequences

    International Nuclear Information System (INIS)

    Ravan, Maryam

    2017-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data. (paper)

  10. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    Directory of Open Access Journals (Sweden)

    Xu Yu

    2018-01-01

    Full Text Available Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot evaluate effectively the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.

  11. Development on advanced technology of local dosimetry for various radiation sources

    International Nuclear Information System (INIS)

    Odano, Naoteru; Ohnishi, Seiki; Ueki, Kohtaro

    2004-01-01

    The development aims at measuring local dose distributions accurately and conveniently and at enhancing the precision of dose evaluation, so that personnel exposure can be reduced. A sheet-type device and a sheet data reader were produced on a trial basis, and their performance was tested with a Sr-90 standard radiation source and a synchrotron radiation source. A computer code was also developed to analyze two-dimensional local dose distributions and to evaluate the precision of the sheet-type dosimeter and data reader. The code enables quick and simple calculation of local exposure doses in a phantom for various beam irradiation conditions. (H. Yokoo)

  12. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    Science.gov (United States)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant drive to improve energy efficiency motivates activities aimed at reducing energy consumption and hence the emission of contaminants into the atmosphere. Cooling demand, both for air conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits in energy availability during particularly hot periods. This has caused growing importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, most often an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm, designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning by lowering the cooling water temperature, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental benefit has been rated for specific sources, which enabled an evaluation and estimate of the effect of implementing the simple algorithm in sources existing nationally.

  13. Iterated Local Search Algorithm with Strategic Oscillation for School Bus Routing Problem with Bus Stop Selection

    Directory of Open Access Journals (Sweden)

    Mohammad Saied Fallah Niasar

    2017-02-01

    Full Text Available The school bus routing problem (SBRP) represents a variant of the well-known vehicle routing problem. The main goal of this study is to pick up students allocated to bus stops and generate routes, including the selected stops, in order to carry students to school. In this paper, we have proposed a simple but effective metaheuristic approach that employs two features: first, it utilizes large neighborhood structures for a deeper exploration of the search space; second, the proposed heuristic executes an efficient transition between the feasible and infeasible portions of the search space. Exploration of the infeasible area is controlled by a dynamic penalty function that drives infeasible solutions back toward feasibility. Two metaheuristics, called N-ILS (a variant of Nearest Neighbourhood with Iterated Local Search) and I-ILS (a variant of Insertion with Iterated Local Search), are proposed to solve SBRP. Our experimental procedure is based on two data sets. The results show that N-ILS is able to obtain better solutions in shorter computing times. Additionally, N-ILS appears to be very competitive in comparison with the best existing metaheuristics suggested for SBRP.
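
    Iterated Local Search itself follows a standard perturb-then-descend loop. A generic sketch of that skeleton (the toy objective, neighborhood and kick below are illustrative, not the paper's SBRP moves) is:

        import random

        def iterated_local_search(start, descend, perturb, cost, n_iter=200):
            # Generic ILS: descend to a local optimum, kick it, descend again,
            # and keep the new solution only if it improves the objective.
            best = descend(start)
            for _ in range(n_iter):
                candidate = descend(perturb(best))
                if cost(candidate) < cost(best):
                    best = candidate
            return best

        # toy usage: minimise a convex function over the integers
        f = lambda x: (x - 42) ** 2

        def descend(x):
            # steepest descent over the +/-1 neighborhood
            while True:
                nxt = min((x - 1, x + 1), key=f)
                if f(nxt) >= f(x):
                    return x
                x = nxt

        kick = lambda x: x + random.randint(-10, 10)
        print(iterated_local_search(0, descend, kick, f))   # 42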

  14. Demonstration of acoustic source localization in air using single pixel compressive imaging

    Science.gov (United States)

    Rogers, Jeffrey S.; Rohde, Charles A.; Guild, Matthew D.; Naify, Christina J.; Martin, Theodore P.; Orris, Gregory J.

    2017-12-01

    Acoustic source localization often relies on large sensor arrays that can be electronically complex and have large data storage requirements to process element-level data. Recently, the concept of a single-pixel imager has garnered interest in the electromagnetics literature due to its ability to form high-quality images with a single receiver paired with shaped aperture screens that allow the collection of spatially orthogonal measurements. Here, we present a method for creating an acoustic analog of the single-pixel imager found in electromagnetics for the purpose of source localization. Additionally, diffraction is considered to account for screen openings comparable to the acoustic wavelength. A diffraction model is presented and incorporated into the single-pixel framework. The method is experimentally validated with laboratory measurements made in an air waveguide.
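
    The measurement model behind single-pixel imaging is compact: each shaped mask yields one inner product between the scene and the mask pattern, and a sparse scene can be recovered from fewer measurements than pixels. The numpy sketch below uses random binary masks and a greedy orthogonal matching pursuit solver, both of which are illustrative assumptions rather than the authors' apertures or reconstruction method:

        import numpy as np

        rng = np.random.default_rng(2)
        n_pixels, n_meas = 64, 32       # fewer measurements than pixels
        scene = np.zeros(n_pixels)
        scene[13] = 1.0                 # one source in a 64-cell grid

        # Each row is one random binary aperture mask; each measurement is the
        # single receiver's output with that mask in place.
        A = rng.integers(0, 2, size=(n_meas, n_pixels)).astype(float)
        y = A @ scene

        def omp(A, y, k):
            """Orthogonal matching pursuit: greedily pick the column that best
            explains the residual, then re-fit by least squares on the support."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        print(np.argmax(omp(A, y, k=1)))   # expected to recover index 13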

  15. Using ensemble models to identify and apportion heavy metal pollution sources in agricultural soils on a local scale

    International Nuclear Information System (INIS)

    Wang, Qi; Xie, Zhiyi; Li, Fangbai

    2015-01-01

    This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs in agricultural soils on the local scale, using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF). The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for assessing multi-source and multi-phase heavy metal pollution in agricultural soils on the local scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region, and that SGB performed better than RF. - Highlights: • Ensemble models including stochastic gradient boosting and random forest are used. • The models were verified by cross-validation, and SGB performed better than RF. • Heavy metal pollution sources on a local scale are identified and apportioned. • The models show good suitability for assessing sources in local-scale agricultural soils. • Anthropogenic sources contributed most to soil Pb and Cd pollution in our case. - Multi-source and multi-phase pollution by heavy metals in agricultural soils on a local scale was identified and apportioned.
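
    The general workflow (fit tree-ensemble regressors to predict a metal concentration from candidate source variables, then read relative source contributions off the fitted feature importances) can be sketched with scikit-learn; the feature names and synthetic data below are hypothetical, not the paper's dataset:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n = 300
        # hypothetical predictors: natural (geogenic) vs. anthropogenic proxies
        X = rng.standard_normal((n, 4))
        feature_names = ["parent_rock_Fe", "soil_pH", "road_density", "irrigation_Cd"]
        # synthetic Cd concentration dominated by the anthropogenic columns
        y = 0.2 * X[:, 0] + 1.0 * X[:, 2] + 1.2 * X[:, 3] + 0.1 * rng.standard_normal(n)

        for model in (GradientBoostingRegressor(random_state=0),
                      RandomForestRegressor(n_estimators=200, random_state=0)):
            model.fit(X, y)
            r2 = cross_val_score(model, X, y, cv=5).mean()   # cross-validated fit
            ranking = sorted(zip(model.feature_importances_, feature_names),
                             reverse=True)
            print(type(model).__name__, round(r2, 3), ranking[0][1])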

  16. Fire Danger of Interaction Processes of Local Sources with a Limited Energy Capacity and Condensed Substances

    OpenAIRE

    Glushkov, Dmitry Olegovich; Strizhak, Pavel Alexandrovich; Vershinina, Kseniya Yurievna

    2015-01-01

    A numerical investigation of the flammable interaction between local energy sources and liquid condensed substances has been carried out. The basic integral characteristic of the process, the ignition delay time, has been determined for different energy source parameters. Recommendations have been formulated to ensure the fire safety of technological processes characterized by the possible formation of local heat sources (cutting, welding, friction, metal grinding, etc.) in the vicinity of storage areas, tra...

  17. Digital closed orbit feedback system for the advanced photon source storage ring

    International Nuclear Information System (INIS)

    Chung, Y.; Barr, D.; Decker, G.

    1995-01-01

    The Advanced Photon Source (APS) is a dedicated third-generation synchrotron light source with a nominal energy of 7 GeV and a circumference of 1104 m. The closed orbit feedback system for the APS storage ring employs unified global and local feedback systems for stabilization of particle and photon beams based on digital signal processing (DSP). Hardware and software aspects of the system are described in this paper. In particular, we discuss the global and local orbit feedback algorithms, the PID (proportional, integral, and derivative) control algorithm, the application of digital signal processing to compensate for vacuum chamber eddy current effects, the resolution of the interaction between the global and local systems through decoupling, self-correction of the local bump closure error, the user interface through the APS control system, and the system performance in the frequency and time domains. The system hardware, including the DSPs, is distributed in 20 VME crates around the ring, and the entire feedback system runs synchronously at a 4-kHz sampling frequency in order to achieve a correction bandwidth exceeding 100 Hz. The required data sharing between the global and local feedback systems is facilitated via fiber-optically networked reflective memories.
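
    The PID control law named in the record is easy to state in discrete time. The gains, sampling step and one-pole plant in this toy loop are illustrative stand-ins, not the APS parameters:

        class PID:
            """Discrete-time PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        # toy loop at the record's 4 kHz sampling rate: drive an orbit reading
        # toward zero through a crude first-order plant response
        pid = PID(kp=1.0, ki=100.0, kd=0.0, dt=1 / 4000)
        orbit = 1.0                                  # initial orbit error (arb. units)
        for _ in range(4000):                        # one second of closed-loop time
            correction = pid.update(0.0, orbit)
            orbit += 0.05 * (correction - orbit)     # plant: low-pass toward input
        print(round(orbit, 6))                       # settles near 0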

  18. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    Science.gov (United States)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness in detecting deeper brain activity. This paper proposes a novel algorithm for the selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as in experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of the physiological functions of midbrain and deep brain structures.
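
    The core operation, projecting the data's row space onto the orthogonal complement of an estimated interference signal subspace, takes only a few lines of numpy. The time courses, dimensions and the assumption that the interference basis is already known are all for illustration:

        import numpy as np

        def project_out(data, interference_basis):
            """Remove an estimated interference signal subspace from the data.

            data:               (n_channels, n_samples) measurement matrix
            interference_basis: (n_samples, k) orthonormal time courses spanning
                                the superficial-source signal subspace
            """
            V = interference_basis
            P = np.eye(V.shape[0]) - V @ V.T     # orthogonal-complement projector
            return data @ P                      # project each channel's row

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 400)
        deep = np.sin(2 * np.pi * 7 * t)         # deep-source time course
        shallow = np.sin(2 * np.pi * 23 * t)     # superficial interference
        B = (np.outer(rng.standard_normal(32), deep)
             + 5 * np.outer(rng.standard_normal(32), shallow))
        V = (shallow / np.linalg.norm(shallow))[:, None]
        cleaned = project_out(B, V)
        print(np.abs(cleaned @ shallow).max())   # interference ~ 0 after projection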

  19. Indoor Localization Algorithms for an Ambulatory Human Operated 3D Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    Nicholas Corso

    2013-12-01

    Full Text Available Indoor localization and mapping is an important problem with many applications such as emergency response, architectural modeling, and historical preservation. In this paper, we develop an automatic, off-line pipeline for metrically accurate, GPS-denied, indoor 3D mobile mapping using a human-mounted backpack system consisting of a variety of sensors. There are three novel contributions in our proposed mapping approach. First, we present an algorithm which automatically detects loop closure constraints from an occupancy grid map. In doing so, we ensure that constraints are detected only in locations that are well conditioned for scan matching. Second, we address the problem of scan matching with a poor initial condition by presenting an outlier-resistant, genetic scan matching algorithm that accurately matches scans despite a poor initial condition. Third, we present two metrics based on the amount and complexity of overlapping geometry in order to vet the estimated loop closure constraints. By doing so, we automatically prevent erroneous loop closures from degrading the accuracy of the reconstructed trajectory. The proposed algorithms are experimentally verified using both controlled and real-world data. The end-to-end system performance is evaluated using 100 surveyed control points in an office environment and obtains a mean accuracy of 10 cm. Experimental results are also shown on three additional datasets from real-world environments, including a 1500-meter trajectory in a warehouse-sized retail shopping center.

  20. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling, simulating a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
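
    The two corrections (smoothing each fiber's temperature trace, then taking the peak position along each of the two orthogonal fibers as one coordinate of the source) can be sketched as follows; the moving-average window, fiber geometry and synthetic hot spot are illustrative assumptions:

        import numpy as np

        def locate_fire(temp_x, temp_y, positions_x, positions_y, window=5):
            """Estimate the fire (x, y) from two orthogonal temperature traces.

            temp_x/temp_y: temperatures along the two fibers; positions_*: the
            corresponding fiber coordinates. A moving average suppresses the
            fluctuations, then the temperature peak along each fiber gives one
            coordinate of the source.
            """
            kernel = np.ones(window) / window
            sx = np.convolve(temp_x, kernel, mode="same")   # smooth each trace
            sy = np.convolve(temp_y, kernel, mode="same")
            return positions_x[np.argmax(sx)], positions_y[np.argmax(sy)]

        # synthetic hot spot at (3.0 m, 1.5 m) plus measurement noise
        pos = np.linspace(0, 5, 200)
        rng = np.random.default_rng(5)
        tx = 20 + 15 * np.exp(-((pos - 3.0) ** 2) / 0.1) + rng.normal(0, 0.5, 200)
        ty = 20 + 15 * np.exp(-((pos - 1.5) ** 2) / 0.1) + rng.normal(0, 0.5, 200)
        print(locate_fire(tx, ty, pos, pos))   # approximately (3.0, 1.5)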

  1. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    International Nuclear Information System (INIS)

    Wetterling, F; Liehr, M; Haueisen, J; Schimpf, P; Liu, H

    2009-01-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  2. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    Science.gov (United States)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.
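
    RAP-MUSIC builds on the classic MUSIC scan, which scores each candidate source topography by how nearly orthogonal it is to the noise subspace of the data covariance; the recursive step then projects out each found source and rescans. A minimal numpy sketch of the basic scan, on a random toy grid rather than a torso model, is:

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            """MUSIC pseudospectrum: peaks where steering vectors are nearly
            orthogonal to the noise subspace of the data covariance R.

            R:        (n_ch, n_ch) sample covariance of the measurements
            steering: (n_grid, n_ch) candidate source topographies (unit norm)
            """
            eigval, eigvec = np.linalg.eigh(R)      # ascending eigenvalues
            En = eigvec[:, :-n_sources]             # noise subspace
            proj = np.linalg.norm(steering @ En, axis=1) ** 2
            return 1.0 / proj                       # large where proj is small

        rng = np.random.default_rng(6)
        n_ch, n_grid = 16, 100
        grid = rng.standard_normal((n_grid, n_ch))
        grid /= np.linalg.norm(grid, axis=1, keepdims=True)
        true_idx = 37
        s = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 500))   # source time course
        X = np.outer(grid[true_idx], s) + 0.05 * rng.standard_normal((n_ch, 500))
        R = X @ X.T / 500
        print(np.argmax(music_spectrum(R, grid, n_sources=1)))   # -> 37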

  3. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with an appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem, and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO-based PIDD controller is demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA), and conventional Ziegler Nichols (ZN) for the same interconnected power system. The proposed approach has also been extended to a two-area power system with diverse sources of generation, such as thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. It is observed from simulation results that the proposed approach provides better dynamic responses than results recently published in the literature. Further, the study is extended to a three-area thermal power system with unequal areas and different controllers in each area, and the results are compared with a published FA-optimized PID controller for the same system under study. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
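
    TLBO itself is simple to state: each candidate moves toward the current best solution while being pushed away from the population mean (teacher phase), then learns pairwise from a random classmate (learner phase), with no algorithm-specific parameters beyond population size and iteration count. A minimal numpy sketch on a toy sphere function (not the AGC controller-tuning objective) is:

        import numpy as np

        def tlbo(cost, bounds, n_pop=20, n_iter=100, seed=0):
            """Minimal Teaching-Learning-Based Optimization sketch.

            cost:   objective to minimise, mapping (d,) -> float
            bounds: (d, 2) array of [low, high] per dimension
            """
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))
            for _ in range(n_iter):
                fitness = np.apply_along_axis(cost, 1, pop)
                teacher = pop[np.argmin(fitness)]
                mean = pop.mean(axis=0)
                for i in range(n_pop):
                    # teacher phase: move toward the teacher, away from the mean
                    tf = rng.integers(1, 3)            # teaching factor, 1 or 2
                    new = pop[i] + rng.random(len(lo)) * (teacher - tf * mean)
                    new = np.clip(new, lo, hi)
                    if cost(new) < cost(pop[i]):
                        pop[i] = new
                    # learner phase: learn from a random classmate
                    j = rng.choice([k for k in range(n_pop) if k != i])
                    step = (pop[i] - pop[j]) if cost(pop[i]) < cost(pop[j]) \
                        else (pop[j] - pop[i])
                    new = np.clip(pop[i] + rng.random(len(lo)) * step, lo, hi)
                    if cost(new) < cost(pop[i]):
                        pop[i] = new
            return pop[np.argmin(np.apply_along_axis(cost, 1, pop))]

        sphere = lambda x: float(np.sum(x ** 2))
        print(tlbo(sphere, np.array([[-5.0, 5.0]] * 3)))   # near [0, 0, 0]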

  4. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical Bayesian approaches.

    Science.gov (United States)

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  5. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical Bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Rasheda Arman Chowdhury

    Full Text Available Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  6. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Latifi, Kujtim, E-mail: Kujtim.Latifi@Moffitt.org [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida (United States); Oliver, Jasmine [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida (United States); Department of Physics, University of South Florida, Tampa, Florida (United States); Baker, Ryan [University of South Florida School of Medicine, Tampa, Florida (United States); Dilling, Thomas J.; Stevens, Craig W. [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida (United States); Kim, Jongphil; Yue, Binglin [Department of Biostatics and Bioinformatics, Moffitt Cancer Center, Tampa, Florida (United States); DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida (United States)

    2014-04-01

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with PB and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy, where GITV denotes the gross internal tumor volume. Conclusions: Local control rates in patients who were planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible

  7. A Cluster-Based Fuzzy Fusion Algorithm for Event Detection in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    ZiQi Hao

    2015-01-01

    Full Text Available As limited energy is one of the tough challenges in wireless sensor networks (WSN), energy saving is important for increasing the lifecycle of the network. Data fusion combines information from several sources to provide a unified view, which can significantly save sensor energy and enhance the accuracy of the sensed data. In this paper, we propose a cluster-based data fusion algorithm for event detection. We use the k-means algorithm to form the nodes into clusters, which can significantly reduce the energy consumption of intracluster communication. The distances between cluster heads and the event, together with the energy of the clusters, are fuzzified, and fuzzy logic is used to select the clusters that will participate in data uploading and fusion. The fuzzy logic method is also used by cluster heads for local decisions, and the local decision results are then sent to the base station. Decision-level fusion for the final event decision is performed by the base station according to the uploaded local decisions and the fusion support degrees of the clusters, calculated by the fuzzy logic method. The effectiveness of this algorithm is demonstrated by simulation results.

  8. GENERATING ACCURATE 3D MODELS OF ARCHITECTURAL HERITAGE STRUCTURES USING LOW-COST CAMERA AND OPEN SOURCE ALGORITHMS

    Directory of Open Access Journals (Sweden)

    M. Zacharek

    2017-05-01

    Full Text Available These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler and VisualSFM software and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  9. Search for gamma-ray emitting AGN among unidentified Fermi-LAT sources using machine learning algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Doert, Marlene [Technische Universitaet Dortmund (Germany); Ruhr-Universitaet Bochum (Germany); Einecke, Sabrina [Technische Universitaet Dortmund (Germany); Errando, Manel [Barnard College, Columbia University, New York City (United States)

    2015-07-01

    The second Fermi-LAT source catalog (2FGL) is the deepest all-sky survey of the gamma-ray sky currently available to the community. Out of the 1873 catalog sources, 576 remain unassociated. We present a search for active galactic nuclei (AGN) among these unassociated objects, which aims at a reduction of the number of unassociated gamma-ray sources and a more complete characterization of the population of gamma-ray emitting AGN. Our study uses two complementary machine learning algorithms which are individually trained on the gamma-ray properties of associated 2FGL sources and thereafter applied to the unassociated sample. The intersection of the two methods yields a high-confidence sample of 231 AGN candidate sources. We estimate the performance of the classification by taking inherent differences between the samples of associated and unassociated 2FGL sources into account. A search for infrared counterparts and first results from follow-up studies in the X-ray band using Swift satellite data for a subset of our AGN candidates are also presented.

  10. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    Science.gov (United States)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust, or is even impossible, due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits facilitated access to algorithmic libraries and the possibility to integrate more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open-source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open-source GPS-SDR platform.

  11. A novel iris localization algorithm using correlation filtering

    Science.gov (United States)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  12. Exploring three faint source detections methods for aperture synthesis radio images

    Science.gov (United States)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.

  13. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    International Nuclear Information System (INIS)

    Kopka, P; Wawrzynczak, A; Borysiewicz, M

    2015-01-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the online-arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, and start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found. (paper)
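
    The simplest member of the ABC family, rejection ABC, makes the idea concrete: draw parameters from the prior, simulate observations, and keep draws whose simulated data fall within a tolerance of the real measurements; the sequential variant then shrinks that tolerance over generations. A toy numpy sketch with a hypothetical one-parameter source-strength model follows:

        import numpy as np

        def abc_rejection(observed, simulate, prior_sample, eps,
                          n_draws=20000, seed=0):
            """Basic rejection ABC: keep parameter draws whose simulated data
            fall within distance eps of the observations."""
            rng = np.random.default_rng(seed)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample(rng)
                if np.linalg.norm(simulate(theta, rng) - observed) < eps:
                    accepted.append(theta)
            return np.array(accepted)   # samples from the approximate posterior

        # toy source-strength problem: sensors read strength/distance plus noise
        sensor_dist = np.array([1.0, 2.0, 4.0])
        true_strength = 5.0
        rng0 = np.random.default_rng(1)
        obs = true_strength / sensor_dist + rng0.normal(0, 0.1, 3)

        simulate = lambda q, rng: q / sensor_dist + rng.normal(0, 0.1, 3)
        prior = lambda rng: rng.uniform(0, 10)
        post = abc_rejection(obs, simulate, prior, eps=0.3)
        print(len(post), post.mean())   # posterior mean near 5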

  14. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Science.gov (United States)

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
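
    The centroid localization step itself is a short computation: subtract the background and take the intensity-weighted mean pixel position of the spot. A minimal numpy sketch on a synthetic Gaussian spot (the point-spread-function width and spot position are illustrative) is:

        import numpy as np

        def centroid_localize(img, threshold=0.0):
            """Estimate a fluorophore's sub-pixel position as the
            intensity-weighted centroid of a background-subtracted spot."""
            w = np.clip(img - threshold, 0, None).astype(float)
            ys, xs = np.indices(img.shape)
            total = w.sum()
            return (w * xs).sum() / total, (w * ys).sum() / total

        # synthetic 11x11 spot centred at (5.3, 4.7) with a Gaussian PSF
        ys, xs = np.indices((11, 11))
        spot = np.exp(-((xs - 5.3) ** 2 + (ys - 4.7) ** 2) / (2 * 1.2 ** 2))
        print(centroid_localize(spot))   # approximately (5.3, 4.7)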

  15. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    Science.gov (United States)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed in the method, which helps balance the local search ability and the global exploitation capability. The formula used by the scout bees to search for a food source is also modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, and it can be seen that the convergence rate of the IABC algorithm is better than that of the standard PSO method.

  16. Finding local communities in protein networks.

    Science.gov (United States)

    Voevodski, Konstantin; Teng, Shang-Hua; Xia, Yu

    2009-09-18

    Protein-protein interactions (PPIs) play fundamental roles in nearly all biological processes, and provide major insights into the inner workings of cells. A vast amount of PPI data for various organisms is available from BioGRID and other sources. The identification of communities in PPI networks is of great interest because they often reveal previously unknown functional ties between proteins. A large number of global clustering algorithms have been applied to protein networks, where the entire network is partitioned into clusters. Here we take a different approach by looking for local communities in PPI networks. We develop a tool, named Local Protein Community Finder, which quickly finds a community close to a queried protein in any network available from BioGRID or specified by the user. Our tool uses two new local clustering algorithms, Nibble and PageRank-Nibble, which look for a good cluster among the most popular destinations of a short random walk from the queried vertex. The quality of a cluster is determined by the proportion of outgoing edges, known as conductance, which is a relative measure particularly useful in undersampled networks. We show that the two local clustering algorithms find communities that not only form excellent clusters, but are also likely to be biologically relevant functional components. We compare the performance of Nibble and PageRank-Nibble to other popular and effective graph partitioning algorithms, and show that they find better clusters in the graph. Moreover, Nibble and PageRank-Nibble find communities that are more functionally coherent. The Local Protein Community Finder, accessible at http://xialab.bu.edu/resources/lpcf, allows the user to quickly find a high-quality community close to a queried protein in any network available from BioGRID or specified by the user. We show that the communities found by our tool form good clusters and are functionally coherent, making our application useful for biologists who wish to

  17. Finding local communities in protein networks

    Directory of Open Access Journals (Sweden)

    Teng Shang-Hua

    2009-09-01

    Full Text Available Abstract Background Protein-protein interactions (PPIs) play fundamental roles in nearly all biological processes, and provide major insights into the inner workings of cells. A vast amount of PPI data for various organisms is available from BioGRID and other sources. The identification of communities in PPI networks is of great interest because they often reveal previously unknown functional ties between proteins. A large number of global clustering algorithms have been applied to protein networks, where the entire network is partitioned into clusters. Here we take a different approach by looking for local communities in PPI networks. Results We develop a tool, named Local Protein Community Finder, which quickly finds a community close to a queried protein in any network available from BioGRID or specified by the user. Our tool uses two new local clustering algorithms, Nibble and PageRank-Nibble, which look for a good cluster among the most popular destinations of a short random walk from the queried vertex. The quality of a cluster is determined by the proportion of outgoing edges, known as conductance, which is a relative measure particularly useful in undersampled networks. We show that the two local clustering algorithms find communities that not only form excellent clusters, but are also likely to be biologically relevant functional components. We compare the performance of Nibble and PageRank-Nibble to other popular and effective graph partitioning algorithms, and show that they find better clusters in the graph. Moreover, Nibble and PageRank-Nibble find communities that are more functionally coherent. Conclusion The Local Protein Community Finder, accessible at http://xialab.bu.edu/resources/lpcf, allows the user to quickly find a high-quality community close to a queried protein in any network available from BioGRID or specified by the user. We show that the communities found by our tool form good clusters and are functionally coherent.

  18. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.

  19. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.

  20. A novel artificial bee colony based clustering algorithm for categorical data.

    Science.gov (United States)

    Ji, Jinchao; Pang, Wei; Zheng, Yanlin; Wang, Zhe; Ma, Zhiqiang

    2015-01-01

    Data with categorical attributes are ubiquitous in the real world. However, existing partitional clustering algorithms for categorical data are prone to fall into local optima. To address this issue, in this paper we propose a novel clustering algorithm, ABC-K-Modes (Artificial Bee Colony clustering based on K-Modes), based on the traditional k-modes clustering algorithm and the artificial bee colony approach. In our approach, we first introduce a one-step k-modes procedure, and then integrate this procedure with the artificial bee colony approach to deal with categorical data. In the search process performed by scout bees, we adopt the multi-source search inspired by the idea of batch processing to accelerate the convergence of ABC-K-Modes. The performance of ABC-K-Modes is evaluated by a series of experiments in comparison with that of other popular algorithms for categorical data.
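
    The one-step k-modes procedure at the heart of the method is easy to sketch: assign each categorical record to the nearest mode by mismatch (Hamming) distance, then replace each mode with the per-attribute majority category of its cluster. A toy numpy sketch (the data are illustrative) follows:

        import numpy as np

        def one_step_kmodes(data, modes):
            """One k-modes iteration for categorical data: assign records to the
            nearest mode by Hamming distance, then recompute each mode as the
            per-attribute majority category of its cluster."""
            dist = (data[:, None, :] != modes[None, :, :]).sum(axis=2)
            labels = dist.argmin(axis=1)
            new_modes = modes.copy()
            for k in range(len(modes)):
                members = data[labels == k]
                if len(members):
                    for j in range(data.shape[1]):
                        vals, counts = np.unique(members[:, j], return_counts=True)
                        new_modes[k, j] = vals[counts.argmax()]
            return new_modes, labels

        data = np.array([["a", "x"], ["a", "y"], ["b", "z"], ["b", "z"]])
        modes = data[[0, 2]].copy()
        modes, labels = one_step_kmodes(data, modes)
        print(modes, labels)   # modes pulled to per-attribute majorities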

  1. Application of Hybrid Genetic Algorithm Routine in Optimizing Food and Bioengineering Processes

    Directory of Open Access Journals (Sweden)

    Jaya Shankar Tumuluru

    2016-11-01

    Full Text Available Optimization is a crucial step in the analysis of experimental results. Deterministic methods only converge to local optima and require exponentially more time as dimensionality increases. Stochastic algorithms are capable of efficiently searching the domain space; however, convergence is not guaranteed. This article demonstrates the novelty of the hybrid genetic algorithm (HGA), which combines both stochastic and deterministic routines for improved optimization results. The new hybrid genetic algorithm developed is applied to the Ackley benchmark function as well as to case studies in food, biofuel, and biotechnology processes. For each case study, the hybrid genetic algorithm found a better optimum candidate than reported by the sources. In the case of food processing, the hybrid genetic algorithm improved the anthocyanin yield by 6.44%. Optimization of bio-oil production using the HGA resulted in a 5.06% higher yield. In the enzyme production process, the HGA predicted a 0.39% higher xylanase yield. Hybridization of the genetic algorithm with a deterministic algorithm resulted in an improved optimum compared to statistical methods.

  2. Localization of sources of the hyperinsulinism through the image methods

    International Nuclear Information System (INIS)

    Abath, C.G.A.

    1990-01-01

    Pancreatic insulinomas are small tumours that manifest early through their high hormonal production. Microscopic changes, such as islet cell hyperplasia or nesidioblastosis, are also sources of hyperinsulinism. Pre-operative localization of the lesions is important, as it avoids unnecessary or insufficient blind pancreatectomies. We present experience with 26 patients with hyperinsulinism, of whom six were examined by ultrasound, nine by computed tomography, 25 by angiography, and 16 by pancreatic venous sampling for hormone assay, in order to localize the lesions. Percutaneous transhepatic portal and pancreatic vein catheterization with measurement of insulin concentrations was the most reliable and sensitive method for detecting the lesions, including those not palpable during surgical exploration (author)

  3. Localization of Simultaneous Moving Sound Sources for Mobile Robot Using a Frequency-Domain Steered Beamformer Approach

    OpenAIRE

    Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean

    2016-01-01

    Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...

  4. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    Science.gov (United States)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  5. Stability and chaos of LMSER PCA learning algorithm

    International Nuclear Information System (INIS)

    Lv Jiancheng; Zhang Yi

    2007-01-01

    The LMSER PCA algorithm is a principal components analysis algorithm used to extract principal components on-line from input data. The algorithm exhibits both stable and chaotic dynamic behavior, depending on the conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete-time system, and conditions for local stability are derived. The paper also explores the chaotic behavior of this algorithm, showing that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior in this algorithm.

  6. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Science.gov (United States)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to detect the position and amplitude of noise sources in workplaces accurately and quickly, in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
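
    The initial map relies on the generalized cross-correlation, which estimates the time delay of arrival between a microphone pair from the peak of a weighted cross-correlation. The numpy sketch below uses the common PHAT weighting, which is an assumption here since the record does not name the weighting function:

        import numpy as np

        def gcc_phat(sig, ref, fs):
            """Estimate the time delay between two microphone signals with the
            phase-transform (PHAT) weighted generalized cross-correlation."""
            n = len(sig) + len(ref)
            S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
            S /= np.abs(S) + 1e-12                   # PHAT whitening
            cc = np.fft.irfft(S, n=n)
            cc = np.concatenate((cc[-(len(ref) - 1):], cc[:len(sig)]))
            lag = np.argmax(np.abs(cc)) - (len(ref) - 1)
            return lag / fs

        fs = 16000
        rng = np.random.default_rng(7)
        src = rng.standard_normal(4096)              # broadband source
        delay = 25                                   # samples (about 1.56 ms)
        m1 = src + 0.05 * rng.standard_normal(4096)
        m2 = (np.concatenate((np.zeros(delay), src))[:4096]
              + 0.05 * rng.standard_normal(4096))
        print(gcc_phat(m2, m1, fs) * fs)             # about 25 samples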

  7. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  8. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. First, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for

  9. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

    Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case where the observed signals are under-determined, a novel approach for single channel blind source separation (SCBSS) based on improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly the unsatisfactory convergence of TSSA in many cases and the difficulty of accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition - singular value decomposition - Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of both a numerical simulation signal and multi-fault rolling bearing signals.

  10. Fire Danger of Interaction Processes of Local Sources with a Limited Energy Capacity and Condensed Substances

    Directory of Open Access Journals (Sweden)

    Glushkov Dmitrii O.

    2015-01-01

    Full Text Available A numerical investigation of the fire-hazardous interaction between local energy sources of limited energy capacity and liquid condensed substances has been carried out. The basic integral characteristic of the process, the ignition delay time, has been determined for different energy source parameters. Recommendations have been formulated to ensure the fire safety of technological processes characterized by the possible formation of local heat sources (cutting, welding, friction, metal grinding, etc.) in the vicinity of areas where flammable liquids (gasoline, kerosene, diesel fuel) are stored, transported, transferred, or processed.

  11. Insulin in the brain: sources, localization and functions.

    Science.gov (United States)

    Ghasemi, Rasoul; Haeri, Ali; Dargahi, Leila; Mohamed, Zahurin; Ahmadiani, Abolhassan

    2013-02-01

    Historically, insulin is best known for its role in peripheral glucose homeostasis, and insulin signaling in the brain has received less attention. Insulin-independent brain glucose uptake has been the main reason for considering the brain an insulin-insensitive organ. However, recent findings showing a high concentration of insulin in brain extracts and expression of insulin receptors (IRs) in central nervous system tissues have drawn considerable attention to the sources, localization, and functions of insulin in the brain. This review summarizes the current state of knowledge of the peripheral and central sources of insulin in the brain, site-specific expression of IRs, and the neurophysiological functions of insulin, including the regulation of food intake, weight control, reproduction, and cognition and memory formation. This review also considers the neuromodulatory and neurotrophic effects of insulin, resulting in proliferation, differentiation, and neurite outgrowth, introducing insulin as an attractive tool for neuroprotection against apoptosis, oxidative stress, beta amyloid toxicity, and brain ischemia.

  12. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.

    Science.gov (United States)

    Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue

    2018-05-25

    A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online using the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from the markers, optical flow, ultrasonic, and inertial sensors, which yields a continuous localization result and effectively reduces the position drift caused by the long-term loss of markers in pure marker localization. The proposed algorithm can easily be implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only provides satisfactory localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons, and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
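
    The record names a federated Kalman filter over marker, optical-flow, ultrasonic, and inertial data. The sketch below is a much-simplified, single-axis illustration of the underlying idea: a constant-velocity Kalman filter fusing frequent velocity measurements (optical-flow-like) with intermittent absolute position fixes (marker-like). All noise levels and update rates are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [position, velocity]
Q = np.diag([1e-4, 1e-3])                # process noise (assumed)
x = np.zeros(2)
P = np.eye(2)

def update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

for k in range(500):
    x, P = F @ x, F @ P @ F.T + Q        # predict
    # optical-flow-like velocity measurement every step (target moves at 0.1 m/s)
    x, P = update(x, P, np.array([0.1]), np.array([[0.0, 1.0]]), np.array([[1e-2]]))
    if k % 25 == 0:                      # marker fix arrives only intermittently
        z = np.array([0.1 * k * dt])     # absolute position of the simulated target
        x, P = update(x, P, z, np.array([[1.0, 0.0]]), np.array([[1e-3]]))

print(x)  # position/velocity estimate stays anchored between marker fixes
```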

  13. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles

    Directory of Open Access Journals (Sweden)

    Boyang Xing

    2018-05-01

    Full Text Available A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online using the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from the markers, optical flow, ultrasonic, and inertial sensors, which yields a continuous localization result and effectively reduces the position drift caused by the long-term loss of markers in pure marker localization. The proposed algorithm can easily be implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only provides satisfactory localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons, and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.

  14. Local and regional variability of evapotranspiration estimated by the SEBAL algorithm

    Directory of Open Access Journals (Sweden)

    Luis C. J. Moreira

    2010-12-01

    Full Text Available Given the importance of knowing evapotranspiration (ET) for the rational use of irrigation water in the current context of water scarcity, regional ET estimation algorithms have been developed using remote sensing tools. This study aimed to apply the SEBAL algorithm (Surface Energy Balance Algorithms for Land) to three Landsat 5 images from the second half of 2006. The images cover irrigated areas, dense native forest, and Caatinga vegetation in three regions of the state of Ceará (Baixo Acaraú, Chapada do Apodi, and Chapada do Araripe). The algorithm calculates hourly evapotranspiration from the latent heat flux, estimated as the residual of the surface energy balance. The ET values obtained in the three regions exceeded 0.60 mm h-1 in the irrigated areas and areas of dense native vegetation. Areas of less dense native vegetation showed hourly ET rates of 0.35 to 0.60 mm h-1, with values close to zero in degraded areas. Analysis of the hourly evapotranspiration means using Tukey's test at 5% probability revealed significant local as well as regional variability in the state of Ceará.
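
    The last step described above, converting the latent heat flux LE (the energy-balance residual Rn − G − H) into hourly ET, reduces to a unit conversion. A minimal sketch, assuming a constant latent heat of vaporization of about 2.45 MJ kg-1:

```python
# Conversion from latent heat flux (W m-2) to hourly evapotranspiration (mm h-1).
# Since 1 kg of evaporated water over 1 m2 corresponds to a 1 mm water layer,
# ET [mm/h] = LE / lambda * 3600.
LAMBDA = 2.45e6            # J kg-1, latent heat of vaporization (assumed constant)

def hourly_et(le_w_m2):
    return le_w_m2 / LAMBDA * 3600.0

print(hourly_et(450.0))    # ~0.66 mm h-1, typical of a well-watered surface
```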

  15. A Diagonal-Steering-Based Binaural Beamforming Algorithm Incorporating a Diagonal Speech Localizer for Persons With Bilateral Hearing Impairment.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Kim, In Young

    2015-12-01

    Previously suggested diagonal-steering algorithms for binaural hearing support devices have commonly assumed that the direction of the speech signal is known in advance, which is not always the case in many real circumstances. In this study, a new diagonal-steering-based binaural speech localization (BSL) algorithm is proposed, and the performance of the BSL algorithm and of the binaural beamforming algorithm, which integrates the BSL and diagonal-steering algorithms, was evaluated using actual speech-in-noise signals in several simulated listening scenarios. Testing sounds were recorded in a KEMAR mannequin setup, and two objective indices, improvement in signal-to-noise ratio (SNRi) and segmental SNR (segSNRi), were utilized for performance evaluation. Experimental results demonstrated that the accuracy of the BSL was in the 90-100% range when the input SNR was in the -10 to +5 dB range. The average differences between the γ-adjusted and γ-fixed diagonal-steering algorithms (for -15 to +5 dB input SNR) in the talking-in-a-restaurant scenario were 0.203-0.937 dB for SNRi and 0.052-0.437 dB for segSNRi, and in the listening-while-driving scenario, the differences were 0.387-0.835 dB for SNRi and 0.259-1.175 dB for segSNRi. In addition, the average difference between the BSL-turned-on and BSL-turned-off cases for the binaural beamforming algorithm in the listening-while-driving scenario was 1.631-4.246 dB for SNRi and 0.574-2.784 dB for segSNRi. In all testing conditions, the γ-adjusted diagonal-steering and BSL algorithms improved the values of the indices more than the conventional algorithms. The binaural beamforming algorithm, which integrates the proposed BSL and diagonal-steering algorithms, is expected to improve the performance of binaural hearing support devices in noisy situations. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  16. A Weight-Aware Recommendation Algorithm for Mobile Multimedia Systems

    Directory of Open Access Journals (Sweden)

    Pedro M. P. Rosa

    2013-01-01

    Full Text Available In recent years, information overload has become a common reality: users, confronted with thousands of potentially interesting items, have great difficulty identifying the ones that can best guide their daily choices, such as concerts, restaurants, sports gatherings, or cultural events. The current growth of mobile smartphones and tablets with embedded GPS receivers, Internet access, cameras, and accelerometers offers new opportunities for mobile ubiquitous multimedia applications that help gather the best information out of an ever-growing list of candidates. This paper presents a mobile recommendation system for events, based on weighted context-aware data-fusion algorithms that combine several multimedia sources. A demonstration deployment used relevance sources such as location data, user habits, and user sharing statistics, and data-fusion algorithms such as the classical CombSUM and CombMNZ, in both simple and weighted forms. Still, the developed methodology is generic, and can be extended to other relevance sources, both direct (background noise volume) and indirect (local temperature extrapolated from GPS coordinates via a Web service), and to other data-fusion techniques. To experiment with, demonstrate, and evaluate the performance of the different algorithms, the proposed system was implemented and deployed as a working mobile application providing real-time, awareness-based information on local events and news.
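
    CombSUM and CombMNZ, mentioned above, are simple score-fusion rules: CombSUM adds the (optionally weighted) scores that each source assigns to an item, while CombMNZ further multiplies that sum by the number of sources that returned the item. A minimal sketch, with hypothetical source names and scores:

```python
def comb_sum(rankings, weights=None):
    """Weighted CombSUM: sum the scores each source assigns to an item."""
    weights = weights or {src: 1.0 for src in rankings}
    fused = {}
    for src, scores in rankings.items():
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + weights[src] * s
    return fused

def comb_mnz(rankings, weights=None):
    """CombMNZ: CombSUM multiplied by the number of sources returning the item."""
    fused = comb_sum(rankings, weights)
    hits = {item: sum(item in s for s in rankings.values()) for item in fused}
    return {item: fused[item] * hits[item] for item in fused}

sources = {  # hypothetical normalized relevance scores per source
    "location": {"concert": 0.9, "restaurant": 0.4},
    "habits":   {"concert": 0.3, "sports": 0.8},
}
print(sorted(comb_mnz(sources, {"location": 0.7, "habits": 0.3}).items(),
             key=lambda kv: -kv[1]))  # items returned by both sources rise to the top
```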

  17. Assessment of Cooperative and Heterogeneous Indoor Localization Algorithms with Real Radio Devices

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Noureddine, Hadi; Amiot, Nicolas

    2014-01-01

    In this paper we present results of real-life localization experiments performed in an unprecedented cooperative and heterogeneous wireless context. The experiments covered measurements of different radio devices packed together on a trolley, emulating a multi-standard Mobile Terminal (MT) along representative trajectories in a crowded office environment. Among all the radio access technologies involved in this campaign (including LTE, WiFi...), the focus is herein put mostly on Impulse Radio - Ultra Wideband (IR-UWB) and ZigBee sub-systems, which are enabled with peer-to-peer ranging capabilities based on Time of Arrival (ToA) estimation and Received Signal Strength (RSS) measurements respectively. Single-link model parameters are preliminarily drawn and discussed. In comparison with existing similar campaigns, new algorithms are also applied to the measurement data, showing the interest of advanced de…

  18. Automatic fuel lattice design in a boiling water reactor using a particle swarm optimization algorithm and local search

    International Nuclear Information System (INIS)

    Lin Chaung; Lin, Tung-Hsien

    2012-01-01

    Highlights: ► An automatic procedure was developed to design the radial enrichment and gadolinia (Gd) distribution of a fuel lattice. ► The method is based on a particle swarm optimization algorithm and local search. ► The design goal was to achieve the minimum local peaking factor. ► The number of fuel pins with Gd and the Gd concentration are fixed to reduce search complexity. ► In this study, three axial sections are designed and lattice performance is calculated using CASMO-4. - Abstract: The axial section of a fuel assembly in a boiling water reactor (BWR) consists of five or six different distributions, each of which requires a radial lattice design. In this study, an automatic procedure based on a particle swarm optimization (PSO) algorithm and local search was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. The design goals were to achieve the minimum local peaking factor (LPF) and to come as close as possible to the specified target average enrichment and target infinite multiplication factor (k∞), with the number of fuel pins containing Gd and the Gd concentration held fixed. In this study, three axial sections are designed, and lattice performance is calculated using CASMO-4. Finally, the neutron cross section library of the designed lattice is established by CMSLINK; the core status during depletion, such as thermal limits, cold shutdown margin, and cycle length, is then calculated using SIMULATE-3 in order to confirm that the lattice design satisfies the design requirements.
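
    The record describes the PSO-plus-local-search combination only at a high level. The sketch below shows the generic pattern on a toy objective; the real design would instead evaluate a candidate lattice with CASMO-4 and score the local peaking factor. All parameters, names, and the objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # stand-in for the lattice evaluation
    return float(np.sum(x ** 2))

def pso_local(f, dim=4, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos += vel
        for i, p in enumerate(pos):   # update personal bests
            v = f(p)
            if v < pval[i]:
                pbest[i], pval[i] = p.copy(), v
        g = pbest[np.argmin(pval)].copy()
        for _ in range(10):           # local search: perturb the global best
            cand = g + rng.normal(0, 0.05, dim)
            if f(cand) < f(g):
                g = cand
    return g, f(g)

print(pso_local(sphere))
```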

  19. Gossip algorithms in quantum networks

    International Nuclear Information System (INIS)

    Siomau, Michael

    2017-01-01

    'Gossip algorithms' is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC operations is polynomial.

  20. Gossip algorithms in quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)

    2017-01-23

    'Gossip algorithms' is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC operations is polynomial.

  1. When local research is a source of real change in Africa ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    18 Feb. 2011 ... African think tanks are tackling some of the most difficult ... When local research is a source of real change in Africa ... What kind of information do actors in the policy sphere need ...

  2. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    International Nuclear Information System (INIS)

    Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen

    2015-01-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are so far not appropriately considered in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS that additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views, using simulations with a computer phantom as well as clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared with the reconstruction results of the FDK algorithm, Compressed Sensing (CS), and PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE) and contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yielded images with substantially improved quality even when the number of projections was very small. The images contained less streaking and blurring and fewer inaccurately reconstructed structures compared with the images reconstructed by FDK, CS, and conventional PICCS. The increased image quality is also reflected in large RMSE differences. In summary, we proposed the pPICCS algorithm, a modification of the original PICCS algorithm that incorporates the local reliability of the prior images.

  3. Iterative Observer-based Estimation Algorithms for Steady-State Elliptic Partial Differential Equation Systems

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-19

    Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines, including biomedical engineering, mechanics of materials, and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy, especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for the steady-state Laplace equation. It is well known that source and boundary estimation problems for elliptic PDEs are highly sensitive to noise in the data. To address this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for the Poisson equation for the noise-free and noisy data cases, respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and source data estimation problems for steady-state Laplace and Poisson kinds of systems, respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.

  4. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array

    Directory of Open Access Journals (Sweden)

    Yankui Zhang

    2018-05-01

    Full Text Available Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array into the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér–Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  5. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    Science.gov (United States)

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array into the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér–Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  6. Impact of local and non-local sources of pollution on background US Ozone: synergy of a low-earth orbiting and geostationary sounder constellation

    Science.gov (United States)

    Bowman, K. W.; Lee, M.

    2015-12-01

    Dramatic changes in the global distribution of emissions over the last decade have fundamentally altered source-receptor pollution impacts. A new generation of low-earth orbiting (LEO) sounders, complemented by geostationary sounders over North America, Europe, and Asia, provides a unique opportunity to quantify the current and future trajectory of emissions and their impact on global pollution. We examine the potential of this constellation of air quality sounders to quantify the role of local and non-local sources of pollution in background ozone in the US. Based upon an adjoint sensitivity method, we quantify the role of synoptic-scale transport of non-US pollution in US background ozone over months representative of different source-receptor relationships. This analysis allows us to distinguish emission trajectories from megacities, e.g., Beijing, or regions, e.g., western China, from natural trends in downwind ozone. We subsequently explore how a combination of LEO and GEO observations could help quantify the balance of local emissions against changes in distant sources. These results show how this unprecedented new international ozone observing system can monitor the changing structure of emissions and their impact on global pollution.

  7. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.

  8. Automatic block-matching registration to improve lung tumor localization during image-guided radiotherapy

    Science.gov (United States)

    Robertson, Scott Patrick

    To improve relatively poor outcomes for locally-advanced lung cancer patients, many current efforts are dedicated to minimizing uncertainties in radiotherapy. This enables the isotoxic delivery of escalated tumor doses, leading to better local tumor control. The current dissertation specifically addresses inter-fractional uncertainties resulting from patient setup variability. An automatic block-matching registration (BMR) algorithm is implemented and evaluated for the purpose of directly localizing advanced-stage lung tumors during image-guided radiation therapy. In this algorithm, small image sub-volumes, termed "blocks", are automatically identified on the tumor surface in an initial planning computed tomography (CT) image. Each block is independently and automatically registered to daily images acquired immediately prior to each treatment fraction. To improve the accuracy and robustness of BMR, this algorithm incorporates multi-resolution pyramid registration, regularization with a median filter, and a new multiple-candidate-registrations technique. The result of block-matching is a sparse displacement vector field that models local tissue deformations near the tumor surface. The distribution of displacement vectors is aggregated to obtain the final tumor registration, corresponding to the treatment couch shift for patient setup correction. Compared to existing rigid and deformable registration algorithms, the final BMR algorithm significantly improves the overlap between target volumes from the planning CT and registered daily images. Furthermore, BMR results in the smallest treatment margins for the given study population. However, despite these improvements, large residual target localization errors were noted, indicating that purely rigid couch shifts cannot correct for all sources of inter-fractional variability. Further reductions in treatment uncertainties may require the combination of high-quality target localization and adaptive radiotherapy.

  9. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  10. A distributed multi-agent linear biobjective algorithm for energy flow optimization in microgrids

    DEFF Research Database (Denmark)

    Brehm, Robert; Top, Søren; Mátéfi-Tempfli, Stefan

    2016-01-01

    A distributed multi-agent algorithm for energy flow optimization in microgrids consisting of local energy resources and storage capacities is presented, based on the auction algorithm for assignment problems originally introduced by Bertsekas in 1979 [1]. It is shown that the topology of a microgrid can be represented as a bipartite graph and described mathematically as a classical transportation problem. This allows applying an auction algorithm scheme in a distributed way, where each energy supply system node is either a source or a sink and is represented by an individually acting agent. The single-objective approach is extended towards bi-objectivity to build a framework…

  11. Human impact on fluvial sediments: distinguishing regional and local sources of heavy metals contamination

    Science.gov (United States)

    Novakova, T.; Matys Grygar, T.; Bábek, O.; Faměra, M.; Mihaljevič, M.; Strnad, L.

    2012-04-01

    Industrial pollution can provide a useful tool to study the spatiotemporal distribution of modern floodplain sediments, trace their provenance, and allow their dating. Regional contamination of southern Moravia (the south-eastern part of the Czech Republic) by heavy metals during the 20th century was determined in fluvial sediments of the Morava River by means of enrichment factors. The influence of local sources and the heterogeneity of sampling sites were studied in overbank fines with different lithology and facies. For this purpose, samples were obtained from hand-drilled cores from regulated channel banks with well-defined local sources of contamination (factories in Zlín and Otrokovice), and also from near-naturally inundated floodplains in two nature protected areas (at 30 km distance). The analyses were performed by X-ray fluorescence spectroscopy (ED XRF), ICP MS (ED XRF sample calibration, 206Pb/207Pb ratio), magnetic susceptibility, cation exchange capacity (CEC), and 137Cs and 210Pb activities. Enrichment factors (EFs) of heavy metals (Pb, Zn, Cu and Cr) and the magnetic susceptibility of overbank fines in near-naturally (near-annually) inundated areas allowed us to reconstruct historical contamination by heavy metals in the entire study area independently of lithofacies. The measured lithological background values were then used to calculate EFs in the channel sediments and in floodplain sediments deposited within a narrow part of the former floodplain, which is now reduced to about one quarter of its original width by flood defences. Sediments from regulated channel banks were found to be stratigraphically and lithologically "erratic" and unreliable for quantification of regional contamination, due to the high variability of the sedimentary environment. On the other hand, these sediments are very sensitive to nearby local sources of heavy metals. For practical work, one must first choose whether large-scale, i.e., truly averaged regional contamination should be reconstructed…
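
    The enrichment factor used above is a simple double ratio: the metal-to-reference-element ratio in the sample, normalized by the same ratio in the local background. A minimal sketch with illustrative (not measured) concentrations:

```python
# Enrichment factor of a heavy metal relative to a conservative lithogenic
# reference element (e.g. Al or Ti), normalized to local background values:
#   EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background
def enrichment_factor(metal_sample, ref_sample, metal_bg, ref_bg):
    return (metal_sample / ref_sample) / (metal_bg / ref_bg)

# Illustrative numbers in mg/kg: Pb vs. Al in a contaminated layer
print(enrichment_factor(85.0, 65000.0, 22.0, 72000.0))  # ~4.3, clearly enriched
```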

  12. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy workload involving multiple interactive definitions, and the source code of the software with better processing results is not open. Therefore, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm with a calculation model of the normal vector distribution: it defines the adjacency region of the point cloud, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.

  13. Digital closed orbit feedback system for the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Chung, Y.; Barr, D.; Decker, G.; Galayda, J.; Lenkszus, F.; Lumpkin, A.; Votaw, A.J.

    1995-01-01

    Closed orbit feedback for the Advanced Photon Source (APS) storage ring employs unified global and local feedback systems for stabilization of the particle and photon beams based on digital signal processing (DSP). Hardware and software aspects of the system will be described. In particular, we will discuss the global and local orbit feedback algorithms, the PID (proportional, integral, and derivative) control algorithm, the application of digital signal processing to compensate for vacuum chamber eddy current effects, resolution of the interaction between the global and local systems through decoupling, self-correction of the local bump closure error, the user interface through the APS control system, and system performance in the frequency and time domains. The system hardware, including the DSPs, is distributed in 20 VME crates around the ring, and the entire feedback system runs synchronously at a 4-kHz sampling frequency in order to achieve a correction bandwidth exceeding 100 Hz. The required data sharing between the global and local feedback systems is facilitated via the use of fiber-optically-networked reflective memories.
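
    The PID control law named above is standard. The sketch below is a toy, single-corrector illustration of a discrete PID loop running at a 4-kHz sampling rate; it is not the APS implementation, and the gains and the static plant response are assumed values.

```python
# Discrete PID corrector driving a single orbit offset to zero.
KP, KI, KD = 0.5, 40.0, 1e-4     # illustrative gains, not APS tuning
DT = 1.0 / 4000.0                # 4-kHz sampling period

def pid_correct(error, state):
    integ, prev = state
    integ += error * DT                      # integral term accumulates error
    deriv = (error - prev) / DT              # backward-difference derivative
    state[:] = [integ, error]
    return KP * error + KI * integ + KD * deriv

state = [0.0, 0.0]
orbit = 1.0                       # initial orbit offset (arbitrary units)
for _ in range(4000):             # one second of closed-loop operation
    kick = pid_correct(orbit, state)
    orbit -= 0.1 * kick           # assumed static response to the corrector kick
print(abs(orbit) < 1e-3)          # offset driven to (near) zero: True
```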

  14. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

    In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract: Fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.

  15. Independent EEG sources are dipolar.

    Directory of Open Access Journals (Sweden)

    Arnaud Delorme

    Full Text Available Independent component analysis (ICA and blind source separation (BSS methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR effected by each decomposition, and decomposition 'dipolarity' defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA; best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison.

  16. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT", part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat", wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data.
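
    The core of the detection step described above, correlating the image with a scaled Marr ("Mexican Hat") wavelet and looking for significantly nonzero coefficients, can be sketched as follows. This is an illustrative toy (brute-force correlation, a hand-rolled kernel, and a simple argmax instead of per-pixel threshold maps), not WAVDETECT itself.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mexican_hat(scale, size):
    """2-D Marr ('Mexican Hat') wavelet sampled on a (size x size) grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / scale ** 2
    return (2 - r2) * np.exp(-r2 / 2)        # near-zero mean: flat background cancels

def correlate(image, kernel):
    """Brute-force 'same'-size correlation (FFTs would be used in practice)."""
    pad = kernel.shape[0] // 2
    windows = sliding_window_view(np.pad(image, pad), kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

rng = np.random.default_rng(1)
img = rng.poisson(2.0, (64, 64)).astype(float)   # flat Poisson background
img[30:33, 40:43] += 15                          # a faint 'source'
coeff = correlate(img, mexican_hat(2.0, 17))
print(np.unravel_index(np.argmax(coeff), coeff.shape))  # peaks near (31, 41)
```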

  17. Study of the short-term health effects of a local air pollution source. Epidemiological approach; Etude des effets a court terme sur la sante d'une source locale de pollution atmospherique. Approche epidemiologique

    Energy Technology Data Exchange (ETDEWEB)

    Guzzo, J.Ch. [Institut National de Veille Sanitaire, Reseau National de Sante Publique, 94 - Saint-Maurice (France)

    2000-07-01

    This document is intended for health professionals who face a problem of risk evaluation relative to a local source of air pollution and plan to carry out an epidemiological study. In this document, only short-term effects are considered, and situations of accidental pollution are not treated. Without being a methodological treatise, it can serve as a tool for better understanding the constraints and limits of epidemiology in answering the difficult question of evaluating the health impact on populations living near a local source of air pollution. (N.C.)

  18. Performance analysis of the partial use of a local optimization operator on the genetic algorithm for the Travelling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Milan Djordjevic

    2012-01-01

    Full Text Available Background: The Travelling Salesman Problem is an NP-hard problem in combinatorial optimization with a number of practical implications. There are many heuristic algorithms and exact methods for solving the problem. Objectives: In this paper we study the influence of hybridizing a genetic algorithm with a local optimizer on solving instances of the Travelling Salesman Problem. Methods/Approach: Our algorithm applies hybridization in varying percentages of the generations of the genetic algorithm. Moreover, we have also studied at which generations to apply the hybridization, and hence applied it at random generations, at the initial generations, and at the last ones. Results: We tested our algorithm on instances with sizes ranging from 76 to 439 cities. On the one hand, the less frequent application of hybridization decreased the average running time of the algorithm from 14.62 s at 100% hybridization to 2.78 s at 10% hybridization, while on the other hand, the quality of the solution on average deteriorated only from 0.21% to 1.40% worse than the optimal solution. Conclusions: In the paper we have shown that even a small amount of hybridization substantially improves the quality of the result. Moreover, the hybridization does not in fact worsen the running time too much. Finally, our experiments show that the best results are obtained when hybridization occurs in the last generations of the genetic algorithm.
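
    The kind of hybridization studied above, running a local optimizer on GA individuals in only a fraction of the generations, can be sketched as follows. This toy version uses a first-improvement 2-opt pass as the local optimizer and applies it with probability `hybrid_rate` per generation; the operators, rates, and random instance are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)
CITIES = [(random.random(), random.random()) for _ in range(30)]

def length(tour):
    """Total length of the closed tour."""
    return sum(((CITIES[tour[i]][0] - CITIES[tour[i - 1]][0]) ** 2 +
                (CITIES[tour[i]][1] - CITIES[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def two_opt(tour):
    """Local optimizer: first-improvement 2-opt pass (reverse one segment)."""
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if length(cand) < length(tour):
                return cand
    return tour

def evolve(pop=40, gens=200, hybrid_rate=0.1):
    population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=length)
        parents = population[:pop // 2]            # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + [c for c in b if c not in a[:cut]]  # order crossover
            if random.random() < 0.2:                              # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
        if random.random() < hybrid_rate:          # hybridize only occasionally
            population[0] = two_opt(population[0])
    return min(population, key=length)

print(round(length(evolve()), 3))
```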

  19. Locating the source of diffusion in complex networks by time-reversal backward spreading

    Science.gov (United States)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
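
    One way to make the time-reversal idea concrete: for each candidate source, subtract the shortest-path propagation delay to every observer from that observer's arrival time; at the true source, the back-propagated emission times coincide, so the candidate with the smallest spread wins. The sketch below is a minimal illustration under strong simplifying assumptions (known uniform propagation speed, noise-free arrival times, sufficient observers), not the paper's algorithm.

```python
import networkx as nx

def locate_source(graph, observers, arrival_times, speed=1.0):
    """Score each node by how consistently the observed arrival times
    line up after reversing propagation along shortest paths."""
    best, best_spread = None, float("inf")
    for cand in graph.nodes:
        d = nx.shortest_path_length(graph, cand)
        # back-propagated emission times as inferred from each observer
        t0 = [arrival_times[o] - d[o] / speed for o in observers]
        spread = max(t0) - min(t0)   # variance would work equally well
        if spread < best_spread:
            best, best_spread = cand, spread
    return best

g = nx.connected_watts_strogatz_graph(60, 4, 0.3, seed=3)
true_d = nx.shortest_path_length(g, 7)         # simulate a spread from node 7
obs = [0, 10, 20, 30, 40]
times = {o: true_d[o] for o in obs}
print(locate_source(g, obs, times))            # recovers 7 if the observers suffice
```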

  20. Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  1. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results…
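
    The distance-to-average-point measure named above is straightforward to compute. A minimal sketch for a real-coded population, normalized by the diagonal of the search space as in the usual DGEA formulation; the threshold values at which the phases would switch are not shown and would be assumptions.

```python
import numpy as np

def distance_to_average_point(population, search_range):
    """Normalized diversity of a real-coded population: mean Euclidean
    distance to the population centroid, divided by the diagonal length
    of the (hypercubic) search space."""
    centroid = population.mean(axis=0)
    mean_dist = np.linalg.norm(population - centroid, axis=1).mean()
    return mean_dist / (search_range * np.sqrt(population.shape[1]))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (50, 10))
print(distance_to_average_point(pop, search_range=10.0))         # high: explore
print(distance_to_average_point(pop * 0.01, search_range=10.0))  # low: exploit
```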

  2. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes as small as the size of an atom and to locate acoustic stimuli with an accuracy of within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The presented system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  3. An Adaptive Tuning Mechanism for Phase-Locked Loop Algorithms for Faster Time Performance of Interconnected Renewable Energy Sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2015-01-01

    Interconnected renewable energy sources (RES) require fast and accurate fault ride through (FRT) operation in order to support the power grid when faults occur. This paper proposes an adaptive phase-locked loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response...

  4. Local Lyapunov exponents for dissipative continuous systems

    International Nuclear Information System (INIS)

    Grond, Florian; Diebner, Hans H.

    2005-01-01

    We analyze a recently proposed algorithm for computing Lyapunov exponents focusing on its capability to calculate reliable local values for chaotic attractors. The averaging process of local contributions to the global measure becomes interpretable, i.e. they are related to the local topological structure in phase space. We compare the algorithm with the commonly used Wolf algorithm by means of analyzing correlations between coordinates of the chaotic attractor and local values of the Lyapunov exponents. The correlations for the new algorithm turn out to be significantly stronger than those for the Wolf algorithm. Since the usage of scalar measures to capture complex structures can be questioned we discuss these entities along with a more phenomenological description of scatter plots

  5. Performance Analysis of Multi-Dimensional ESPRIT-Type Algorithms for Arbitrary and Strictly Non-Circular Sources With Spatial Smoothing

    Science.gov (United States)

    Steinwandt, Jens; Roemer, Florian; Haardt, Martin; Galdo, Giovanni Del

    2017-05-01

    Spatial smoothing is a widely used preprocessing scheme to improve the performance of high-resolution parameter estimation algorithms in the case of coherent signals or if only a small number of snapshots is available. In this paper, we present a first-order performance analysis of the spatially smoothed versions of R-D Standard ESPRIT and R-D Unitary ESPRIT for sources with arbitrary signal constellations, as well as R-D NC Standard ESPRIT and R-D NC Unitary ESPRIT for strictly second-order (SO) non-circular (NC) sources. The derived expressions are asymptotic in the effective signal-to-noise ratio (SNR), i.e., the approximations become exact for either high SNRs or a large sample size. Moreover, no assumptions on the noise statistics are required apart from zero mean and finite SO moments. We show that both R-D NC ESPRIT-type algorithms with spatial smoothing perform asymptotically identically in the high effective SNR regime. Generally, the performance of spatial smoothing based algorithms depends on the number of subarrays, which is a design parameter and needs to be chosen beforehand. In order to gain more insight into the optimal choice of the number of subarrays, we simplify the derived analytical R-D mean square error (MSE) expressions for the special case of a single source. The obtained MSE expression explicitly depends on the number of subarrays in each dimension, which allows us to analytically find the optimal number of subarrays for spatial smoothing. Based on this result, we additionally derive the maximum asymptotic gain from spatial smoothing and explicitly compute the asymptotic efficiency for this special case. All the analytical results are verified by simulations.
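
    Forward-only spatial smoothing itself is compact: average the covariance matrices of overlapping subarrays, which restores the rank of the source covariance for coherent signals. A minimal sketch for a uniform linear array with two fully coherent sources; the subarray count and the eigenvalue threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def spatial_smoothing(X, L):
    """Forward spatial smoothing: average the covariances of L overlapping
    subarrays of an M-sensor ULA (each subarray has M - L + 1 sensors)."""
    M, N = X.shape
    m = M - L + 1
    R = np.zeros((m, m), dtype=complex)
    for l in range(L):
        Xl = X[l:l + m, :]
        R += Xl @ Xl.conj().T / N
    return R / L

# Two coherent sources: the plain covariance has signal rank 1, so subspace
# methods fail; smoothing with L >= 2 restores an effective signal rank of 2.
M, N = 10, 500
angles = np.deg2rad([10, 40])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
s = np.random.default_rng(0).standard_normal(N)    # identical waveform -> coherent
X = A @ np.vstack([s, s]) + 0.01 * np.random.default_rng(1).standard_normal((M, N))
R_ss = spatial_smoothing(X, L=4)
print(np.sum(np.linalg.eigvalsh(R_ss) > 1e-2))     # effective signal rank ~2
```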

  6. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    The integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed; a local ontology is produced by constructing the variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology heterogeneous database is then proposed, drawing support from the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study has been carried out in order to test the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the proposed algorithm can automate the construction of distributed concept lattices over heterogeneous data sources.

  7. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying the analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether for a single pollution source or for multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree soundly with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves that the established source identification model can be used to direct emergency responses.
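
    The construction named above, an objective function built from the analytic solution of the one-dimensional unsteady advection-dispersion equation for an instantaneous point release, can be sketched as follows. The GA itself is omitted; the flow velocity, dispersion coefficient, and channel cross-section are hypothetical values, not those of the paper.

```python
import numpy as np

def concentration(x, t, mass, x0, u=0.3, D=1.5, A=20.0):
    """Analytic solution of the 1-D advection-dispersion equation for an
    instantaneous release of `mass` at location x0 (consistent SI units):
    C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t))."""
    return (mass / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-((x - x0 - u * t) ** 2) / (4 * D * t)))

def objective(params, observations):
    """Sum of squared residuals between modeled and observed concentrations;
    a GA would minimize this over the unknowns (mass, x0)."""
    mass, x0 = params
    return sum((concentration(x, t, mass, x0) - c) ** 2
               for (x, t), c in observations.items())

# Synthetic observations from a 'true' release of 500 kg at x0 = 120 m
obs = {(x, t): concentration(x, t, 500.0, 120.0)
       for x in (300.0, 500.0) for t in (600.0, 1200.0)}
print(objective((500.0, 120.0), obs))  # ~0 at the true source parameters
```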

  8. Analysis of filtration properties of locally sourced base oil for the ...

    African Journals Online (AJOL)

    This study examines the use of locally sourced oils, such as groundnut oil, melon oil, vegetable oil, soya oil and palm oil, as substitutes for diesel oil in formulating oil-based drilling fluids, with respect to filtration properties. The filtrate volumes of each of the oils were obtained for filtration control analysis. With increasing potash and ...

  9. Indoor footstep localization from structural dynamics instrumentation

    Science.gov (United States)

    Poston, Jeffrey D.; Buehrer, R. Michael; Tarazaga, Pablo A.

    2017-05-01

    Measurements from accelerometers originally deployed to measure a building's structural dynamics can serve a new role: locating individuals moving within a building. Specifically, this paper proposes measurements of footstep-generated vibrations as a novel source of information for localization. The complexity of wave propagation in a building (e.g., dispersion and reflection) limits the utility of existing algorithms designed to locate, for example, the source of sound in a room or radio waves in free space. This paper develops enhancements to arrival time determination and time-difference-of-arrival localization in order to address the complexities posed by wave propagation within a building's structure. Experiments with actual measurements from an instrumented public building demonstrate the potential of locating footsteps to sub-meter accuracy. Furthermore, this paper explains how to forecast performance in other buildings with different sensor configurations. This localization capability holds the potential to assist public safety agencies in building evacuation and incident response, to facilitate occupancy-based optimization of heating or cooling, and to inform facility security.

  10. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    Science.gov (United States)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  11. Localization of accessory pathway in patients with Wolff-Parkinson-White syndrome from surface ECG using the Arruda algorithm

    International Nuclear Information System (INIS)

    Saidullah, S.; Shah, B.

    2016-01-01

    Background: To ablate an accessory pathway successfully and conveniently, accurate localization of the pathway is needed. Electrophysiologists use different algorithms before taking patients to the electrophysiology (EP) laboratory in order to plan the intervention accordingly. In this study, we used the Arruda algorithm to locate the accessory pathway. The objective of the study was to determine the accuracy of the Arruda algorithm for locating the pathway on surface ECG. Methods: This was a cross-sectional observational study conducted from January 2014 to January 2016 in the electrophysiology department of Hayat Abad Medical Complex, Peshawar, Pakistan. A total of fifty-nine (n=59) consecutive patients of both genders, aged 14-60 years, who presented with WPW syndrome (symptomatic tachycardia with a delta wave on surface ECG) were included in the study. Each patient's electrocardiogram (ECG) was analysed with the Arruda algorithm before the patient was taken to the laboratory. A standard four-wire protocol was used for the EP study before ablation. Once the findings were confirmed, the pathway was ablated as per standard guidelines. Results: A total of fifty-nine (n=59) patients between the ages of 14 and 60 years were included in the study. Cumulative mean age was 31.5 years ± 12.5 SD. There were 56.4% (n=31) males with mean age 28.2 years ± 10.2 SD and 43.6% (n=24) females with mean age 35.9 years ± 14.0 SD. The Arruda algorithm was found to be accurate in predicting the exact accessory pathway (AP) in 83.6% (n=46) of cases. Among all inaccurate predictions (n=9), the Arruda algorithm incorrectly predicted two thirds (n=6; 66.7%) of the pathways towards the right side (right posteroseptal, right posterolateral and right anterolateral). Conclusion: The Arruda algorithm was found to be highly accurate in predicting the accessory pathway before ablation. (author)

  12. Computational Discovery of Materials Using the Firefly Algorithm

    Science.gov (United States)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, increased computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and the solutions adopted to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and to gain insight into the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
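    For readers unfamiliar with the firefly heuristic, here is a minimal generic sketch (not the PyChemia implementation): each agent moves toward brighter, i.e. lower-energy, agents with an attractiveness that decays with distance, plus a small random step. The Rastrigin function stands in for a real structure-energy evaluation, and all parameters are illustrative:

```python
import numpy as np

def energy(x):
    """Stand-in energy landscape (Rastrigin); a real materials search would
    evaluate a relaxed structure's energy here."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
n, dim = 20, 2
beta0, gamma, alpha = 1.0, 1.0, 0.05       # attractiveness, absorption, noise
pos = rng.uniform(-5, 5, (n, dim))
for _ in range(200):
    E = np.array([energy(p) for p in pos])
    for i in range(n):
        for j in range(n):
            if E[j] < E[i]:                # firefly i moves toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attraction fades with distance
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
best = min(pos, key=energy)
print("best configuration:", best, "energy:", energy(best))
```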

  13. Underwater tracking of a moving dipole source using an artificial lateral line: algorithm and experimental validation with ionic polymer–metal composite flow sensors

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan, Xiaobo

    2013-01-01

    Motivated by the lateral line system of fish, arrays of flow sensors have been proposed as a new sensing modality for underwater robots. Existing studies on such artificial lateral lines (ALLs) have been mostly focused on the localization of a fixed underwater vibrating sphere (dipole source). In this paper we examine the problem of tracking a moving dipole source using an ALL system. Based on an analytical model for the moving dipole-generated flow field, we formulate a nonlinear estimation problem that aims to minimize the error between the measured and model-predicted magnitudes of flow velocities at the sensor sites, which is subsequently solved with the Gauss–Newton scheme. A sliding discrete Fourier transform (SDFT) algorithm is proposed to efficiently compute the evolving signal magnitudes based on the flow velocity measurements. Simulation indicates that it is adequate and more computationally efficient to use only the signal magnitudes corresponding to the dipole vibration frequency. Finally, experiments conducted with an artificial lateral line consisting of six ionic polymer–metal composite (IPMC) flow sensors demonstrate that the proposed scheme is able to simultaneously locate the moving dipole and estimate its vibration amplitude and traveling speed with small errors. (paper)
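    The sliding DFT idea can be shown compactly: a single DFT bin is updated once per sample by removing the oldest sample, adding the newest and rotating the phase, which is far cheaper than recomputing a full transform at every step. This is a generic sketch rather than the authors' code; the sample rate, window length and dipole frequency are made up:

```python
import numpy as np

fs, N, f_dip = 1000.0, 256, 45.0       # sample rate [Hz], window, dipole frequency
k = int(round(f_dip * N / fs))         # DFT bin tracked by the sliding DFT
twiddle = np.exp(2j * np.pi * k / N)

t = np.arange(2048) / fs               # synthetic sensor signal, drifting amplitude
x = np.linspace(0.5, 1.5, t.size) * np.sin(2 * np.pi * f_dip * t)

X, buf = 0j, np.zeros(N)
for n, sample in enumerate(x):
    # One bin per sample: drop the oldest sample, add the newest, rotate phase.
    X = (X + sample - buf[n % N]) * twiddle
    buf[n % N] = sample
# Magnitude estimate of the tracked component (approximate if the tone is off-bin).
print("tracked amplitude ≈", 2 * abs(X) / N)
```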

  14. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.

  15. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Haftka, Raphael T

    2004-01-01

    ... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...

  16. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
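    For orientation, the classical Blahut-Arimoto iteration for the plain rate-distortion function (without the actions or side information treated in the paper) looks roughly like this; the binary-source example with Hamming distortion is illustrative only:

```python
import numpy as np

def blahut_arimoto_rd(px, d, s, iters=300):
    """Classical Blahut-Arimoto iteration for R(D) with Lagrange parameter s;
    the action-dependent problem in the paper generalizes this scheme."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])        # output marginal q(y)
    for _ in range(iters):
        w = q * np.exp(-s * d)                       # unnormalized p(y|x)
        p_y_x = w / w.sum(axis=1, keepdims=True)     # optimal conditional
        q = px @ p_y_x                               # re-estimated marginal
    D = np.sum(px[:, None] * p_y_x * d)              # expected distortion
    R = np.sum(px[:, None] * p_y_x * np.log2(p_y_x / q))  # mutual information
    return R, D

# Uniform binary source with Hamming distortion; sweep s to trace out R(D).
px = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto_rd(px, d, s=4.0))
```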

  17. An adaptive Phase-Locked Loop algorithm for faster fault ride through performance of interconnected renewable energy sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2013-01-01

    Interconnected renewable energy sources require fast and accurate fault ride through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid...... side converter control of a renewable energy source, especially under fault ride through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault with the purpose of accelerating the performance...

  18. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe....... Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance...... (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \\, \\text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  19. A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-02-01

    Historical paper seismograms are very important information for earthquake monitoring and prediction, and their vectorization is an important problem to be resolved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms: it can transform an original scanned image into digital waveform data, and accurately tracing out all the key points of each curve in a seismogram is the foundation of the vectorization process. In this paper, we present a new curve tracing algorithm based on local features, applied to the automatic extraction of earthquake waveforms from paper seismograms.

  20. Linearized versus non-linear inverse methods for seismic localization of underground sources

    DEFF Research Database (Denmark)

    Oh, Geok Lian; Jacobsen, Finn

    2013-01-01

    The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...

  1. Acoustic Localization of Breakdown in Radio Frequency Accelerating Cavities

    Energy Technology Data Exchange (ETDEWEB)

    Lane, Peter Gwin [IIT, Chicago]

    2016-07-01

    Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool which can localize the source of breakdown sparks inside the cavity. These sparks generate thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources as well as the hardware, software, and algorithms used to localize the source of sound emitted from breakdown thermal shocks are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are also given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.

  2. Examining effective use of data sources and modeling algorithms for improving biomass estimation in a moist tropical forest of the Brazilian Amazon

    Science.gov (United States)

    Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella

    2017-01-01

    Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....

  3. Optimum design for rotor-bearing system using advanced genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Young Chan; Choi, Seong Pil; Yang, Bo Suk

    2001-01-01

    This paper describes a combined method to compute the global and local solutions of optimization problems. The present hybrid algorithm uses both a genetic algorithm and a local concentrated search algorithm (e.g., the simplex method). The hybrid algorithm is not only faster than the standard genetic algorithm but also supplies a more accurate solution; in addition, it can find both global and local optimum solutions. The present algorithm can be applied to minimize the resonance response (Q factor) and to place the critical speeds as far from the operating speed as possible. These factors play very important roles in designing a rotor-bearing system under dynamic behavior constraints. In the present work, the shaft diameter, the bearing length, and the clearance are used as the design variables.

  4. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  6. Protein Sub-Nuclear Localization Based on Effective Fusion Representations and Dimension Reduction Algorithm LDA.

    Science.gov (United States)

    Wang, Shunfang; Liu, Shuhui

    2015-12-19

    An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent a protein sequence because each takes only a single perspective. Thus, this paper proposes two fusion feature representations, DipPSSM and PseAAPSSM, to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to weigh the importance of its components; the optimal values of the balance factors are sought by a genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. Numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representations treated by LDA outperform the untreated ones.

  7. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect displacement amplitudes on the order of the size of an atom and to locate acoustic stimuli with an accuracy of within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.

  8. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    Science.gov (United States)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess very high convergence speed and a good capacity to overcome local minima, and they have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists addressing this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (angles of strike, dip and rake) and the source location in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require the approximation of a Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine, updating the model parameters under GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied well to WMI and has its unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
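    A bare-bones version of the gravitational search idea (not the authors' implementation) can be sketched as follows: agents receive masses according to fitness and attract one another under a decaying gravitational constant. A simple quadratic misfit stands in for the expensive finite-difference waveform misfit, and all constants are invented:

```python
import numpy as np

def misfit(m):
    """Stand-in misfit; a real WMI run would forward-model waveforms with a
    finite-difference engine and compare them against observations."""
    return np.sum((m - np.array([40.0, 60.0, -20.0])) ** 2)  # hidden mechanism

rng = np.random.default_rng(2)
n, dim, G0, T = 15, 3, 100.0, 300
X = rng.uniform(-90, 90, (n, dim))          # agents: (strike, dip, rake) candidates
V = np.zeros((n, dim))
for t in range(T):
    f = np.array([misfit(x) for x in X])
    G = G0 * np.exp(-20.0 * t / T)          # decaying gravitational constant
    m = (f.max() - f + 1e-12) / (f.max() - f.min() + 1e-12)
    M = m / m.sum()                         # normalized masses (best agent heaviest)
    for i in range(n):
        acc = np.zeros(dim)                 # M_i cancels, so this sum is acceleration
        for j in range(n):
            if j != i:
                r = np.linalg.norm(X[j] - X[i]) + 1e-9
                acc += rng.random() * G * M[j] * (X[j] - X[i]) / r
        V[i] = rng.random() * V[i] + acc
        X[i] += V[i]
print("best mechanism found:", min(X, key=misfit))
```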

  9. Memetic Algorithm and its Application to the Arrangement of Exam Timetable

    Directory of Open Access Journals (Sweden)

    Wenhua Huang

    2016-06-01

    This paper looks at memetic algorithms for solving timetabling problems. We present a new memetic algorithm which consists of a global search algorithm and a local search algorithm. In the proposed method, a genetic algorithm is chosen for the global search while a simulated annealing algorithm is used for the local search. In particular, we obtained an optimal solution in a .NET implementation using real data from JiangXi Normal University. Experimental results show that the proposed algorithm can solve the university exam timetabling problem efficiently.
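    The structure of such a memetic algorithm, a genetic outer loop whose offspring are refined by simulated annealing, can be sketched on a toy exam-timetabling instance; the conflict matrix, slot count and parameters here are invented and much smaller than a real university timetable:

```python
import numpy as np

rng = np.random.default_rng(3)
n_exams, n_slots = 12, 4
# Hypothetical conflict matrix: entry (i, j) = students taking both exams i and j.
conflicts = np.triu(rng.integers(0, 4, (n_exams, n_exams)), 1)

def penalty(tt):
    """Clashes: conflicting exams placed in the same timeslot."""
    return int(np.sum(conflicts * (tt[:, None] == tt[None, :])))

def anneal(tt, T=3.0, steps=120):
    """Local refinement by simulated annealing (the 'meme')."""
    cur, cur_pen = tt.copy(), penalty(tt)
    for _ in range(steps):
        cand = cur.copy()
        cand[rng.integers(n_exams)] = rng.integers(n_slots)  # move one exam
        d = penalty(cand) - cur_pen
        if d < 0 or rng.random() < np.exp(-d / T):
            cur, cur_pen = cand, cur_pen + d
        T = max(T * 0.95, 1e-3)
    return cur

pop = [rng.integers(n_slots, size=n_exams) for _ in range(10)]
for gen in range(30):                       # genetic outer loop
    pop.sort(key=penalty)
    elite, children = pop[:4], []
    for _ in range(6):
        a, b = elite[rng.integers(4)], elite[rng.integers(4)]
        child = np.where(rng.random(n_exams) < 0.5, a, b)   # uniform crossover
        children.append(anneal(child))      # memetic step: refine each child
    pop = elite + children
best = min(pop, key=penalty)
print("best timetable:", best, "clashes:", penalty(best))
```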

  10. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  11. Exploiting Deep Neural Networks and Head Movements for Robust Binaural Localization of Multiple Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Ma, Ning; May, Tobias; Brown, Guy J.

    2017-01-01

    This paper presents a novel machine-hearing system that exploits deep neural networks (DNNs) and head movements for robust binaural localization of multiple sources in reverberant environments. DNNs are used to learn the relationship between the source azimuth and binaural cues, consisting...... of the complete cross-correlation function (CCF) and interaural level differences (ILDs). In contrast to many previous binaural hearing systems, the proposed approach is not restricted to localization of sound sources in the frontal hemifield. Due to the similarity of binaural cues in the frontal and rear...

  12. Algorithmic requirements for swarm intelligence in differently coupled collective systems

    International Nuclear Information System (INIS)

    Stradner, Jürgen; Thenius, Ronald; Zahadat, Payam; Hamann, Heiko; Crailsheim, Karl; Schmickl, Thomas

    2013-01-01

    Swarm systems are based on intermediate connectivity between individuals and dynamic neighborhoods. In natural swarms, self-organizing principles bring the agents to that favorable level of connectivity, and they serve as interesting sources of inspiration for control algorithms in swarm robotics on the one hand and in modular robotics on the other. In this paper we demonstrate and compare a set of bio-inspired algorithms that are used to control the collective behavior of swarms and modular systems: BEECLUST, AHHS (hormone controllers), FGRN (fractal genetic regulatory networks), and VE (virtual embryogenesis). We demonstrate how such bio-inspired control paradigms bring their host systems to a level of intermediate connectivity that delivers sufficient robustness for collective decentralized control. In parallel, these algorithms allow sufficient volatility of shared information within these systems to help prevent local optima and deadlock situations, thereby keeping the systems flexible and adaptive in dynamic, non-deterministic environments.

  13. The Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael; Rots, Arnold; Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Danny G. Gibbs, II; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory are used to generate one of the most extensive X-ray source catalogs produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  14. Cryogenic technology review of cold neutron source facility for localization

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hun Cheol; Park, D. S.; Moon, H. M.; Soon, Y. P. [Daesung Cryogenic Research Institute, Ansan (Korea)]; Kim, J. H. [United Pacific Technology, Inc., Ansan (Korea)]

    1998-02-01

    This research was performed to localize the cold neutron source (CNS) facility in HANARO, and the report consists of two parts. In PART I, the local and foreign technology for CNS facilities is investigated and examined. In PART II, safety and licensing are investigated. A CNS facility consists of a cryogenic part and a warm part. The cryogenic part includes a helium refrigerator, vacuum-insulated pipes, a condenser, cryogenic fluid tubes and a moderator cell. The warm part includes moderator gas control, vacuum equipment and a process monitoring system. The warm part is at a high technical level domestically, as a result of the development of the semiconductor industry, and can be localized. However, even though cryogenic technology is expected to play an important role in developing the cutting-edge technology of the 21st century, there is a lack of specialists and research facilities, since the domestic market is small and research institutes and the government do not recognize its importance. Therefore, a long research period is needed in order to localize the facility. The safety standards of reactors for hydrogen gas in domestic nuclear power regulations are compared with those of foreign countries, and the licensing method for installation of a CNS facility is examined. System failures and their influence are also analyzed. 23 refs., 59 figs., 26 tabs. (Author)

  15. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency to fall into local optima of standard particle swarm optimization when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a punishment factor in the penalty function method, and randomly generates an initial feasible population, which accelerates the convergence of the particle swarm. It also introduces the crossover and mutation strategies of the genetic algorithm to prevent the particle swarm from falling into local optima. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)

  16. A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance

    Directory of Open Access Journals (Sweden)

    Na Dong

    2016-01-01

    An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics is combined into the position updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem of parking lots, which is an important issue in the management systems of large-scale parking lots. It is used to find the optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance have been verified through the application results.
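    The chaotic twist on PSO can be illustrated by replacing the uniform random draws in the velocity update with iterates of a logistic map; this is a generic sketch, with a toy route-cost function standing in for a real parking-lot network model, and all coefficients are invented:

```python
import numpy as np

def route_cost(p):
    """Toy stand-in for the path cost to a parking space; a real system would
    evaluate distance/congestion over the parking-lot route network."""
    return np.sum((p - np.array([3.0, -2.0])) ** 2) + np.sin(5 * p).sum() ** 2

rng = np.random.default_rng(4)
n, dim = 15, 2
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([route_cost(p) for p in x])
g = pbest[np.argmin(pcost)].copy()
z = 0.37                                    # logistic-map state, z in (0, 1)
for _ in range(200):
    for i in range(n):
        z = 4.0 * z * (1.0 - z); r1 = z     # chaotic iterates replace uniform draws
        z = 4.0 * z * (1.0 - z); r2 = z
        v[i] = 0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) + 1.5 * r2 * (g - x[i])
        x[i] += v[i]
        c = route_cost(x[i])
        if c < pcost[i]:
            pbest[i], pcost[i] = x[i].copy(), c
    g = pbest[np.argmin(pcost)].copy()
print("best route endpoint found:", g)
```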

  17. Review on solving the inverse problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Fabri Simon G

    2008-11-01

    In this primer, we give a review of the inverse problem for EEG source localization. This is intended for researchers new to the field, to give insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non-parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem, namely the non-parametric and parametric methods. The main difference between the two is whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher-resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
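    As a pointer to the simplest family mentioned above, the minimum norm estimate admits a closed form: with lead field L, measurements y and Tikhonov regularization λ, the source estimate is ĵ = Lᵀ(LLᵀ + λI)⁻¹y. Here is a small sketch with a random, purely hypothetical lead field and an arbitrary regularization choice:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 32, 500
L = rng.normal(size=(n_sensors, n_sources))   # hypothetical lead-field matrix

# Scalp potentials from two active dipoles plus measurement noise.
j_true = np.zeros(n_sources)
j_true[[40, 310]] = [1.0, -0.8]
y = L @ j_true + 0.05 * rng.normal(size=n_sensors)

lam = 0.1 * np.trace(L @ L.T) / n_sensors     # crude regularization choice
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print("strongest reconstructed sources:", np.argsort(-np.abs(j_hat))[:5])
```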

  18. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  19. Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem

    DEFF Research Database (Denmark)

    Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E

    2017-01-01

    The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....

  20. Bad Clade Deletion Supertrees: A Fast and Accurate Supertree Algorithm.

    Science.gov (United States)

    Fleischauer, Markus; Böcker, Sebastian

    2017-09-01

    Supertree methods merge a set of overlapping phylogenetic trees into a supertree containing all taxa of the input trees. The challenge in supertree reconstruction is the way of dealing with conflicting information in the input trees. Many different algorithms for different objective functions have been suggested to resolve these conflicts. In particular, there exist methods based on encoding the source trees in a matrix, where the supertree is constructed applying a local search heuristic to optimize the respective objective function. We present a novel heuristic supertree algorithm called Bad Clade Deletion (BCD) supertrees. It uses minimum cuts to delete a locally minimal number of columns from such a matrix representation so that it is compatible. This is the complement problem to Matrix Representation with Compatibility (Maximum Split Fit). Our algorithm has guaranteed polynomial worst-case running time and performs swiftly in practice. Different from local search heuristics, it guarantees to return the directed perfect phylogeny for the input matrix, corresponding to the parent tree of the input trees, if one exists. Comparing supertrees to model trees for simulated data, BCD shows a better accuracy (F1 score) than the state-of-the-art algorithms SuperFine (up to 3%) and Matrix Representation with Parsimony (up to 7%); at the same time, BCD is up to 7 times faster than SuperFine, and up to 600 times faster than Matrix Representation with Parsimony. Finally, using the BCD supertree as a starting tree for a combined Maximum Likelihood analysis using RAxML, we reach significantly improved accuracy (1% higher F1 score) and running time (1.7-fold speedup). © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  1. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  2. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    Science.gov (United States)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, at any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE), permitting retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and

  3. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for the exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition assumes additionally an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. Rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees to recover, with high probability, s scatterers with n = O(s^2) random sampling and incident directions and sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane MUSIC guarantees to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localizations both BPDN and MUSIC guarantee, with high probability, to identify exactly the frequencies of random signals with the number n = O(s) of sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while
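    The core MUSIC recipe referenced here, in its familiar direction-of-arrival form, is short enough to sketch: form the sample covariance, split signal and noise subspaces, and scan a steering-vector grid for pseudospectrum peaks. The array geometry, source directions and noise level below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(6)
m, snap, d = 8, 200, 0.5                  # sensors, snapshots, spacing [wavelengths]
doas = np.deg2rad([-20.0, 25.0])          # two hidden source directions

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

S = (rng.normal(size=(2, snap)) + 1j * rng.normal(size=(2, snap))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(m, snap)) + 1j * rng.normal(size=(m, snap)))
X = steering(doas) @ S + noise

R = X @ X.conj().T / snap                 # sample covariance
_, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :-2]                            # noise subspace (m - 2 eigenvectors)

grid = np.deg2rad(np.linspace(-90, 90, 721))
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)
idx = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1   # local maxima
top = idx[np.argsort(P[idx])[-2:]]
print("estimated DOAs [deg]:", np.sort(np.rad2deg(grid[top])))
```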

  4. Beamforming with a circular microphone array for localization of environmental noise sources

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn; Fernandez Grande, Efren

    2010-01-01

    It is often sufficient to localize environmental sources of noise from different directions in a plane. This can be accomplished with a circular microphone array, which can be designed to have practically the same resolution over 360°. The microphones can be suspended in free space or they can

  5. When Gravity Fails: Local Search Topology

    Science.gov (United States)

    Frank, Jeremy; Cheeseman, Peter; Stutz, John; Lau, Sonie (Technical Monitor)

    1997-01-01

    Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e. global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.

  6. Explosion localization and characterization via infrasound using numerical modeling

    Science.gov (United States)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, which both provide complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging
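    The back-project-and-stack step can be sketched with straight-ray travel times in place of the FDTD-derived Green's functions used by RTM-FDTD; the stations, wave speed and synthetic explosion below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
stations = np.array([[0.0, 0.0], [4000.0, 500.0], [1500.0, 3500.0]])  # [m]
c, fs = 340.0, 50.0                        # straight-ray sound speed, sample rate

src, t0 = np.array([2000.0, 1800.0]), 2.0  # hidden explosion and origin time
t = np.arange(0, 30, 1 / fs)
env = np.array([np.exp(-((t - (t0 + np.linalg.norm(s - src) / c)) / 0.1) ** 2)
                + 0.05 * np.abs(rng.normal(size=t.size)) for s in stations])

# Back-project: shift each envelope by its travel time to a grid node and stack.
xs, ys = np.linspace(0, 4000, 81), np.linspace(0, 4000, 81)
stack = np.zeros((ys.size, xs.size))
for iy, yv in enumerate(ys):
    for ix, xv in enumerate(xs):
        tt = np.linalg.norm(stations - [xv, yv], axis=1) / c
        shifts = np.round(tt * fs).astype(int)
        aligned = [np.roll(env[k], -shifts[k]) for k in range(len(stations))]
        stack[iy, ix] = np.max(np.sum(aligned, axis=0))
iy, ix = np.unravel_index(np.argmax(stack), stack.shape)
print("estimated source:", xs[ix], ys[iy])
```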

  7. Moving source localization with a single hydrophone using multipath time delays in the deep ocean.

    Science.gov (United States)

    Duan, Rui; Yang, Kunde; Ma, Yuanliang; Yang, Qiulong; Li, Hui

    2014-08-01

    Localizing a source of radial movement at moderate range using a single hydrophone can be achieved in the reliable acoustic path by tracking the time delays between the direct and surface-reflected arrivals (D-SR time delays). The problem is defined as a joint estimation of the depth, initial range, and speed of the source, which are the state parameters for the extended Kalman filter (EKF). The D-SR time delays extracted from the autocorrelation functions are the measurements for the EKF. Experimental results using pseudorandom signals show that accurate localization results are achieved by offline iteration of the EKF.
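    The measurement side of this scheme, extracting the D-SR time delay from an autocorrelation function, can be sketched as follows. Because the surface reflection is phase-reversed, the delay shows up as a negative autocorrelation peak; the signal parameters are hypothetical, and in the paper these extracted delays then feed the EKF as measurements:

```python
import numpy as np

fs = 8000.0
rng = np.random.default_rng(8)
s = rng.normal(size=int(fs))              # broadband source signature, 1 s

# Received signal: direct arrival plus a weaker, phase-reversed surface
# reflection delayed by the D-SR time delay we want to recover.
true_delay = 0.0625                       # [s], hypothetical
k = int(true_delay * fs)
x = s.copy()
x[k:] += -0.6 * s[:-k]                    # surface bounce (reflection coeff. -0.6)
x += 0.1 * rng.normal(size=x.size)

# The D-SR delay appears as a negative peak of the autocorrelation function.
ac = np.correlate(x, x, mode="full")[x.size - 1:]
lag = np.argmin(ac[10:int(0.2 * fs)]) + 10   # skip the zero-lag lobe
print("estimated D-SR delay: %.4f s" % (lag / fs))
```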

  8. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. In this, autocorrelation is used to extract the location coefficient from the periodic AE signal, and wavelet packet energy is calculated to get the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. Then a new location algorithm based on the location coefficient is presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBG) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  9. PM(10) episodes in Greece: Local sources versus long-range transport-observations and model simulations.

    Science.gov (United States)

    Matthaios, Vasileios N; Triantafyllou, Athanasios G; Koutrakis, Petros

    2017-01-01

    Periods of abnormally high concentrations of atmospheric pollutants, defined as air pollution episodes, can cause adverse health effects. Southern European countries experience high particulate matter (PM) levels originating from local and distant sources. In this study, we investigated the occurrence and nature of extreme PM10 (PM with an aerodynamic diameter ≤10 μm) pollution episodes in Greece. We examined PM10 concentration data from 18 monitoring stations located at five sites across the country: (1) an industrial area in northwestern Greece (Western Macedonia Lignite Area, WMLA), which includes sources such as lignite mining operations and lignite power plants that generate a high percentage of the energy in Greece; (2) the greater Athens area, the most populated area of the country; and (3) Thessaloniki, (4) Patra, and (5) Volos, three large cities in Greece. We defined extreme PM10 pollution episodes (EEs) as days during which PM10 concentrations at all five sites exceeded the European Union (EU) 24-hr PM10 standards. For each EE, we identified the corresponding prevailing synoptic and local meteorological conditions, including surface wind data, for the period from January 2009 through December 2011. We also analyzed data from remote sensing and model simulations. We recorded 14 EEs that occurred over 49 days and could be grouped into two categories: (1) Local Source Impact (LSI; 26 days, 53%) and (2) African Dust Impact (ADI; 23 days, 47%). Our analysis suggested that the contribution of local sources to ADI EEs was relatively small. LSI EEs were observed only in the cold season, whereas ADI EEs occurred throughout the year, with a higher frequency during the cold season. The EEs with the highest intensity were recorded during African dust intrusions. ADI episodes were found to contribute more than local sources in Greece, with the ratio of ADI to LSI fractional contributions ranging from 1.1 to 3.10. The EE contribution during ADI fluctuated from 41 to 83

  10. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    International Nuclear Information System (INIS)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F.

    2011-01-01

    automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate ∼1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as the radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostat images will be performed for accurate and robust applicator/source localization in ICB patients.

  11. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)]

    2011-02-15

    , fast, and completely automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate ~1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as the radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostat images will be performed for accurate and robust applicator/source localization in ICB patients.

  12. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-02-01

    localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate approximately 1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as the radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostat images will be performed for accurate and robust applicator/source localization in ICB patients.

  13. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    Science.gov (United States)

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (the Aperio nuclear algorithm for CD3+ cells/mm² and the Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to ...). Quantitation with the open source algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.

  14. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  16. Structural Health Monitoring of Wind Turbine Blades: Acoustic Source Localization Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Omar Mabrok Bouzid

    2015-01-01

    Full Text Available Structural health monitoring (SHM) is important for reducing the maintenance and operation cost of safety-critical components and systems in offshore wind turbines. This paper proposes an in situ wireless SHM system based on an acoustic emission (AE) technique. This technique introduces a number of challenges due to high sampling rate requirements and limitations in communication bandwidth, memory space, and power resources. To overcome these challenges, this paper focuses on two elements: (1) the use of an in situ wireless SHM technique in conjunction with low sampling rates; (2) localization of acoustic sources which could emulate impact damage or audible cracks caused by different objects, such as tools, bird strikes, or strong hail, all of which represent abrupt AE events and could affect the structural health of a monitored wind turbine blade. The localization process is performed using features extracted from aliased AE signals based on a developed constraint localization model. To validate these elements, the proposed system was tested by localizing emulated AE sources acquired in the field.
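
    The abstract does not reproduce the constraint model itself, but the generic arrival-time-difference formulation underlying this kind of AE source localization can be sketched as below; the sensor layout, wave speed, and solver are assumptions for illustration.

```python
# Sketch of planar AE source localization from time differences of
# arrival (TDOA): minimize the mismatch between measured and predicted
# time differences over candidate source positions. All numbers here
# are synthetic assumptions, not values from the paper.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
c = 1500.0  # assumed wave speed in the blade material, m/s

def residuals(p, tdoa):
    d = np.linalg.norm(sensors - p, axis=1)
    return (d[1:] - d[0]) / c - tdoa     # predicted minus measured TDOA

true_src = np.array([0.3, 0.7])
d = np.linalg.norm(sensors - true_src, axis=1)
tdoa = (d[1:] - d[0]) / c                # synthetic measurements
est = least_squares(residuals, x0=[0.5, 0.5], args=(tdoa,)).x
print(est)                               # ~ [0.3, 0.7]
```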

  17. Physics-based approach to chemical source localization using mobile robotic swarms

    Science.gov (United States)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing, and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
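
    As a caricature of the flux-following idea (not Zarzhitsky's actual controller), a robot on a grid can estimate the chemical mass flux rho*v from local density and wind samples and step toward the neighboring cell with the greatest flux divergence, since a chemical emitter is a region where mass flux diverges.

```python
# Sketch of a fluxotaxis-style step on a sampled grid. The grid fields
# and the greedy step rule are assumptions for illustration only.
import numpy as np

def fluxotaxis_step(pos, rho, vx, vy):
    """pos: (i, j) grid cell; rho: chemical density field;
    vx, vy: wind velocity components sampled on the same grid."""
    fx, fy = rho * vx, rho * vy                     # mass flux rho*v
    div = np.gradient(fx, axis=0) + np.gradient(fy, axis=1)
    i, j = pos
    nbrs = [(i + di, j + dj)
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if 0 <= i + di < rho.shape[0] and 0 <= j + dj < rho.shape[1]]
    # A source emits mass, i.e., has positive flux divergence, so climb
    # toward the neighbor where the estimated divergence is largest.
    return max(nbrs, key=lambda n: div[n])
```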

  18. MODA: an efficient algorithm for network motif discovery in biological networks.

    Science.gov (United States)

    Omidi, Saeed; Schreiber, Falk; Masoudi-Nejad, Ali

    2009-10-01

    In recent years, interest has been growing in the study of complex networks. Since Erdős and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed around local features called motifs, the building blocks of networks. Unfortunately, network motif discovery is a computationally hard problem, and finding rather large motifs (more than 8 nodes) with current algorithms is impractical because it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify motifs with more than 8 nodes more efficiently than most current state-of-the-art motif discovery algorithms. While most algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/
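
    To see why naive motif counting explodes, consider the brute-force baseline that MODA improves upon: enumerate every connected induced k-node subgraph and tally isomorphism classes. For k = 3 on an undirected graph the classes are just paths and triangles, which a few lines can count; the sketch below is that toy baseline, not MODA's pattern growth approach.

```python
# Toy brute-force count of connected induced 3-node subgraphs,
# grouped into their two isomorphism classes (path vs. triangle).
from itertools import combinations

def count_3motifs(nodes, edges):
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = {"path": 0, "triangle": 0}
    for trio in combinations(nodes, 3):
        k = sum(1 for u, v in combinations(trio, 2) if v in adj[u])
        if k == 2:
            counts["path"] += 1       # connected, two edges
        elif k == 3:
            counts["triangle"] += 1   # fully connected
    return counts

print(count_3motifs("abcd", [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))
# {'path': 2, 'triangle': 1}
```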

  19. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
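
    A one-source, greedy caricature of the fitting step: model the SRP power map as an ideal kernel centered at the source plus noise, and estimate the position by correlating the map with that kernel. The Gaussian kernel below is an assumption standing in for the paper's generative model.

```python
# Sketch: locate the dominant source in an SRP power map by matched
# filtering with an assumed Gaussian source kernel.
import numpy as np
from scipy.signal import fftconvolve

def fit_one_source(srp_map, sigma=2.0):
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    score = fftconvolve(srp_map, kernel, mode="same")
    return np.unravel_index(np.argmax(score), srp_map.shape)
```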

  20. A geometrical perspective on localization

    NARCIS (Netherlands)

    Dulman, S.O.; Baggio, A.; Havinga, Paul J.M.; Langendoen, K.G.; Zhang, Ying; Ye, Yinyu

    2008-01-01

    A large number of localization algorithms for wireless sensor networks (WSNs) are evaluated against the Cramér-Rao Bound (CRB) as an indicator of how well an algorithm performs. The CRB defines a lower bound on the precision of any unbiased localization estimator. The CRB concept, borrowed from
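
    For reference, the bound invoked here is the standard Cramér-Rao inequality: for any unbiased estimator of the node positions, the estimator covariance is bounded below by the inverse Fisher information.

```latex
\operatorname{cov}(\hat{\theta}) \succeq I(\theta)^{-1},
\qquad
I(\theta) = \mathbb{E}\!\left[
  \nabla_{\theta} \log p(x;\theta)\,
  \nabla_{\theta} \log p(x;\theta)^{\top}
\right]
```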